Evaluating the Reliability and Validity of the Results

Reliability and Validity of the Results

Accuracy

Precision

Validity

Reliability

Analyzing Internal Validity of the Research Design

Maturation

Testing

Statistical Regression

Selection

Experimental Mortality

Analyzing External Validity of the Research Design

Interaction

Pretesting

Multiple Treatments or Interventions

Recommendation of a Better Design

Evaluate the Reliability and Validity of the Results

In a 2015 Washington Post article titled "How was sexual assault measured?", Scott Clement explains that many factors need to be evaluated in order to test the true accuracy, precision, validity, and reliability of the survey presented. The post outlines the research survey undertaken by the Washington Post and the Kaiser Family Foundation to assess the extent and prevalence of sexual assault. The study concluded that 20% of current and recent female college students living on or near campus reported being sexually assaulted while attending school (Clement). The following section evaluates the reliability and validity of these results and then points out a number of flaws perceived in the research itself.

Accuracy

Firstly, let us examine the accuracy of this article, meaning the degree to which the measurement used in the survey represents the true value of what the surveyor is looking for. In the case of sexual assault, the topic is ambiguous, and according to the article, "measuring the prevalence of sexual assault is a tricky task in surveys for two reasons." The first is that asking about sexual assault plainly could produce unreliable results, because the definition of what sexual assault entails may be relative rather than a common belief among all individuals. The second is that sexual assault is a highly sensitive topic, and respondents may not be willing to report their true experiences. It can be argued that telephone interviews were used because access to the participants was minimal; however, this itself becomes a flaw. Accuracy is sensitive to detail, such as dates, the people present, and the like, and the fact that the interviews were conducted by telephone makes it difficult to verify the information given by the participants. The survey makes the mistake of assuming that the information given is reliable and accurate. It is important to question implicitly the responses to the questions being asked and to look for indications of deception or self-deception by the participants, and in this case that was not possible. Telephone interviews make it difficult to observe the participants and judge whether they are telling the truth or showing signs of deception, and relying solely on the participants' voices to make this determination means the accuracy of the information obtained cannot be assured.

Precision

Taking into consideration the aforementioned aspects, we must also examine precision, which is basically the degree to which the results of this study would resemble those of other studies conducted under similar circumstances. The article states that "the Post-Kaiser survey found that 20% of current and recent female students report being assaulted by force or while incapacitated, compared with the 13.7% in the 2007 survey among current college students only." In comparing these two surveys, we cannot assume that the gap reflects only random error, because the surveys may have been perceived completely differently even though their goals were very similar. There is no certainty that the questions were asked in exactly the same manner. A lack of precision would imply random error, but unless the questions were identically worded we cannot determine whether such error exists, nor does the article state whether there were any precision issues between the individual surveys.
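As an illustration only, the sketch below compares the two reported proportions with a simple two-proportion z-test. The sample sizes are hypothetical, since the article excerpt does not report them, so the output says nothing about the actual surveys; it merely shows how one would check whether a gap of this size could plausibly arise from random error alone.

```python
# Hedged illustration: a two-proportion z-test on the two survey estimates.
# The sample sizes n1 and n2 are assumptions, not figures from the article.
from math import sqrt
from statistics import NormalDist

p1, n1 = 0.200, 1000  # Post-Kaiser estimate; sample size assumed
p2, n2 = 0.137, 1000  # 2007 survey estimate; sample size assumed

# Pooled proportion under the null hypothesis that the two rates are equal
p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"z = {z:.2f}, two-sided p = {p_value:.4f}")
# A small p-value would mean the gap is unlikely to be random error alone,
# but it cannot tell us whether the two surveys asked comparable questions.
```

Even under assumed sample sizes, statistical significance would not settle the precision question raised above, because precision also depends on whether the question wording and survey mode were held constant across the two studies.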

Validity

By definition, validity seeks to determine whether the research truly measures what it was intended and purposed to measure, or how truthful the results of the research are. In other words, the question is whether the research instrument used enables the researcher to hit the mark of the research object. When it comes to validity, we must make sure that the study itself measures exactly what it is intended to measure....

...

In the article, the terminology of sexual assault is broken down into five specific types of assault: (1) forced sexual touching, (2) oral sex, (3) sexual intercourse, (4) anal penetration, and (5) sexual penetration with a finger or object. This way, the term itself is neither as subjective nor as ambiguous. Respondents have a better understanding of what they are being asked, and surveyors have a better way of analyzing their data (Creswell and Miller, 125).

Reliability

Lastly, reliability also plays a key role. Reliability can be defined as the degree to which the results obtained are consistent over time and accurately and precisely represent the total population under study. Moreover, if the results and outcomes attained in the study can be reproduced under a similar methodology, then the research instrument is considered reliable; reliability asks whether one would obtain the same values if the measurement were repeated. Given factors such as trauma, fear, and other aspects of recalling an assault, the participants could be expected to provide dissimilar responses to the same questions even if an identical methodology were undertaken. Several flaws of this kind can be perceived in this article (Golafshani, 602).

Three elements are considered when assessing this aspect. The first is the degree to which the measurement, administered repeatedly, remains the same; the second is the stability of the measurement over time; and the third is the similarity of measurements within a given time period. It is imperative to note that consistency is the main measure of reliability. One of the main flaws of the survey discussed in Clement's article is the aspect of proximity to the events. The Post-Kaiser survey did not take this aspect into consideration. This matters because events that took place a long time ago may be remembered differently and therefore not described accurately and precisely, which calls the reliability of the study into question (Golafshani, 600).
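To make the test-retest notion concrete, the sketch below correlates two hypothetical sets of responses from the same respondents at two points in time; the scores are invented for illustration and are not taken from the survey.

```python
# Hedged illustration of test-retest reliability: the correlation between two
# administrations of the same questions to the same respondents. Scores are
# invented purely for illustration, not taken from the Post-Kaiser survey.
from statistics import correlation  # Python 3.10+

time_1 = [4, 2, 5, 3, 4, 1, 5, 2]  # hypothetical scores, first administration
time_2 = [4, 3, 5, 3, 4, 2, 4, 2]  # hypothetical scores, second administration

r = correlation(time_1, time_2)
print(f"test-retest reliability (Pearson r) = {r:.2f}")
# Values near 1.0 indicate a stable instrument; lower values suggest that
# answers drift over time, e.g. as memory of distant events fades.
```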

Overall, a number of elements of the research article can be perceived as flawed. One flaw is the use of telephone interviews rather than face-to-face interviews, which limits the ability to determine whether the participants' responses involve any form of deception; analyzing responses over the telephone is difficult, and participants can easily give false or altered answers. Another important element is the time elapsed since the event, because events that took place long before the interview are not recalled precisely and are liable to be altered in the participants' accounts. In addition, sensitivity is a factor: sexual assault is a highly sensitive topic, and respondents may not be willing to report their true experiences (Kirk and Miller, 52).

Part 2

The following section will analyze the internal and external validity of the case study "Marketing Analysts and Promotional Specialists, Inc." Subsequently, a recommendation for a better research design will be given to address any flaws that arise.

Analyzing Internal Validity of the Research Design

According to McDaniel et al. (Chapter 10), internal validity is defined as the degree to which competing explanations for the observed experimental results can be dismissed. In other words, internal validity asks how well one can establish that the dependent variable, Y, is caused by the independent variable, X, and not by other factors. In critically evaluating the research design, it is therefore imperative to identify the dependent and independent variables. The dependent variable, Y, is the sales of the different beers included in the experiment; the independent variable is the placement of the products, either in the fridge or along the aisles. The degree of control exercised over prospective extraneous variables determines the level of internal validity. When evaluating the internal validity of a study, the main consideration is whether the conclusions follow from the data and the procedures used.
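As a purely illustrative sketch of this X-causes-Y structure, the snippet below compares hypothetical weekly sales under the two placements; the figures are invented and are not data from the case study.

```python
# Hedged illustration of the cause-and-effect structure: sales (Y, dependent)
# compared across shelf placements (X, independent). The weekly sales figures
# are invented for illustration, not taken from the case study.
from statistics import mean

sales_by_placement = {
    "aisle shelf": [102, 98, 105, 100, 97],    # assumed baseline placement
    "fridge":      [108, 104, 110, 106, 103],  # assumed treatment placement
}

baseline = mean(sales_by_placement["aisle shelf"])
treated = mean(sales_by_placement["fridge"])
lift = (treated - baseline) / baseline * 100
print(f"observed lift from the placement change: {lift:.1f}%")
# Internal validity asks whether this lift can be attributed to the placement
# alone, or whether extraneous variables (seasonality, promotions, store
# traffic) could explain it just as well.
```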

To begin with, the research in the case study involves a treatment effect, and therefore there is a cause-and-effect relationship to assess. As indicated, sales increased by 5% in treatment 2 as a result of the change in where the Dixie beer was placed. It was well documented that the cold beer be placed at the…


Works Cited

Blanche, Martin T., K. Durrheim, and Desmond Painter. Research in Practice: Applied Methods for the Social Sciences. New York: UCT Press, 2003.

Bracht, Glenn H., and Gene V. Glass. "The External Validity of Experiments." American Educational Research Journal (1968): 437-474.

Clement, Scott. "How was sexual assault measured?" The Washington Post, 2015.

Creswell, J. W., and D. L. Miller. "Determining Validity in Qualitative Inquiry." Theory into Practice 39.3 (2000): 124-131.

