
Numerical Research That Can Be


… numerical research that can be analyzed in a statistical fashion. Quantitative research frequently -- although not exclusively -- deploys the scientific method, whereby a hypothesis is tested in a controlled fashion. One group, the experimental group, is subjected to an intervention known as the independent variable, while another, otherwise similar group is designated the control group and not subjected to that variable. The dependent variable is the change or lack of change that results from the intervention, and the results support or refute the initial hypothesis.
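The logic of comparing an experimental group against a control group can be sketched in a few lines of Python. The data below are invented for illustration, and Welch's t statistic is used as one common way of scaling the difference in group means; it is not drawn from the papers cited in this essay.

```python
# Hypothetical two-group experiment: the control group receives no
# intervention, the experimental group receives the independent variable.
# The dependent variable is the measured outcome in each group.
from statistics import mean, stdev

control      = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2]   # no intervention
experimental = [11.0, 11.4, 10.8, 11.2, 11.1, 10.9] # intervention applied

def welch_t(a, b):
    """Welch's two-sample t statistic: the difference in means scaled
    by the combined standard error. A larger |t| means stronger
    evidence against the null hypothesis of no effect."""
    na, nb = len(a), len(b)
    se = (stdev(a) ** 2 / na + stdev(b) ** 2 / nb) ** 0.5
    return (mean(a) - mean(b)) / se

t = welch_t(experimental, control)
print(round(t, 2))  # → 8.71, well past conventional critical values
```

A t value this large would lead a researcher to reject the null hypothesis for these (invented) data; the decision rule itself depends on the significance criterion chosen in advance.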

Quantitative research can also take the form of a survey or other instrument designed to collect raw data about a particular population. In contrast, qualitative research is designed to explore the evolution of a particular phenomenon in narrative form. Responses from test subjects may be coded and subjected to data analysis, but ultimately the goal of this type of research is to record the particular experience of a population in a holistic fashion, not test a theory within limited parameters.

This contrast means that qualitative research is often characterized as subjective, in contrast to the supposedly superior objectivity of quantitative methodologies. However, there are many persistent problems with quantitative research that complicate this schematic notion. It has been observed that "poor statistics" make for "poor science," and this is true of all disciplines: indeed, in the social sciences, where variables are more difficult to isolate within populations, rigorous statistical methodology to eliminate error is even more important (Gardenier & Resnik 2002: 70).

Also, in quantitative research, effective statistical testing is vital regardless of the experiment, given the ethical implications of asking human subjects to take the risk of participating in a study of questionable utility and value (Gardenier & Resnik 2002: 66). In an experiment involving statistical analysis of a population, the formal 'null hypothesis' is tested: the hypothesis that the intervention will have no effect. The null is thus a statement contrary to what researchers want to demonstrate.

In general, it is assumed that false rejection of the null hypothesis is less damaging than false acceptance -- i.e., it is thought that overestimating the potential impact of a variable is less troubling than not recognizing its impact (Baroudi & Orlikowski 1989: 88). "The embedded null approach involves embedding a hypothesis of no effect within an interaction framework. The framework is then used to show that, under certain conditions, the manipulation/predictor variable in question does produce an effect or relationship, while under the conditions of primary interest, the effect or relationship does not appear" (Cortina 2002: 342). This cautious approach to tracking change makes sense given that the selection of the test population may be imperfect and contain too many outliers. That is why a 'statistically significant' alteration must be in evidence, not simply any change at all.
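The requirement that a change be statistically significant, not simply any change at all, can be illustrated with a permutation test, one standard way of judging significance. The groups and iteration count below are illustrative assumptions, not data from the studies cited here.

```python
import random

random.seed(0)

def perm_p_value(a, b, n_iter=5000):
    """Approximate p-value for a difference in means via a permutation
    test: shuffle the group labels repeatedly and count how often a
    random split yields a difference at least as large as the observed
    one. A large p-value means the observed change is the kind of
    change chance alone produces routinely."""
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_iter):
        random.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return hits / n_iter

# A tiny difference in means amid noise: a change, but not a
# statistically significant one.
group_a = [5.0, 5.2, 4.9, 5.1, 5.3]
group_b = [5.1, 5.0, 5.2, 5.4, 4.9]
print(perm_p_value(group_a, group_b))  # large p: the null cannot be rejected
```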

"The reason that we avoid concluding a lack of effect from studies that show minimal or non-significant results is that there are many alternative explanations for this finding" (Cortina 2002: 343). Particularly in the social sciences, it can be difficult to isolate all variables that could potentially impact the test population, given that human beings live in the 'real world' and are the sum of lived experiences that cannot necessarily be controlled by the test's creator.

Only "if one can rule out some of these alternative explanations" as to why a phenomenon occurred while "offering sound theoretical justification for a hypothesis of no effect, then there is no reason to avoid no effect conclusions" (Cortina 2002: 343). Beyond the statistical analysis itself, the ways in which groups are allocated and designed can also exert a powerful influence upon results, underlining the importance of test construction. This is yet another reason we cannot assume that data are 'objective' simply because they are quantitative in nature.

For example, when constructing an experiment, "an extreme groups design (e.g., assigning participants to high or low conditions) maximizes the variances of the components of the product term; it also results in much more power with respect to the interaction effect than would the corresponding observational design" (Cortina 2002: 343). Conversely, conducting an experiment 'in the field' is likely to yield a less statistically significant result because of the inability to control the extremity of the variables.

A study of the statistical power of research in the social sciences revealed that only 40% of all MIS studies had adequate statistical power -- that is, an acceptable probability of correctly rejecting a false null hypothesis (Baroudi & Orlikowski 1989: 87). Significance criteria, sample size, and effect size can all influence statistical power, and once again, when dealing with human subjects, many additional variables can affect it as well (Baroudi & Orlikowski 1989: 87). The use of certain statistical conventions can also yield inaccurate results if deployed in an inappropriate fashion.
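The dependence of statistical power on sample size can be sketched with a small Monte Carlo simulation. The effect size, critical value (t > 2), and trial counts below are illustrative assumptions, not figures from Baroudi & Orlikowski.

```python
import random
from statistics import mean, stdev

random.seed(1)

def power_estimate(effect, n, t_crit=2.0, trials=2000):
    """Monte Carlo sketch of statistical power: the fraction of
    simulated experiments whose t statistic clears a critical value,
    i.e. the chance of correctly rejecting a false null hypothesis."""
    hits = 0
    for _ in range(trials):
        control = [random.gauss(0.0, 1.0) for _ in range(n)]
        treated = [random.gauss(effect, 1.0) for _ in range(n)]
        se = (stdev(control) ** 2 / n + stdev(treated) ** 2 / n) ** 0.5
        if (mean(treated) - mean(control)) / se > t_crit:
            hits += 1
    return hits / trials

# Same real effect, different sample sizes: power rises with n.
print(power_estimate(effect=0.5, n=10))   # low power: the effect is often missed
print(power_estimate(effect=0.5, n=100))  # high power
```

The underpowered study here fails to detect a genuinely present effect most of the time, which is precisely the false acceptance of the null hypothesis discussed above.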

For example, disregarding 'outliers' -- extreme values that affect the findings -- is a common practice and may be appropriate or inappropriate depending upon the circumstances, as can filling in missing results to enable the statistical analysis to be done in the first place (Gardenier & Resnik 2002: 68). If the outliers cannot be explained convincingly as genuine outliers, or if the missing data cannot be extrapolated reliably, these practices can produce wildly inaccurate results. Sometimes the misuse of quantitative data is unintentional; other times it is deliberate.
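The leverage a single outlier exerts can be seen in a minimal example: one extreme value shifts the mean dramatically while leaving the median nearly untouched. The numbers are invented for illustration.

```python
from statistics import mean, median

clean    = [10, 11, 9, 10, 12, 10, 11, 9]
with_out = clean + [95]  # one extreme value, e.g. a data-entry error

# The mean is dragged far upward by the single outlier...
print(mean(clean), round(mean(with_out), 2))  # 10.25 19.67
# ...while the median barely registers it.
print(median(clean), median(with_out))        # 10.0 10
```

Whether that 95 should be discarded depends on whether it can be explained as a genuine anomaly; dropping it without justification is exactly the kind of convention-misuse Gardenier and Resnik describe.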

In some instances, data may be obtained in a fraudulent and unethical manner, deliberately designed to produce a particular, false result (such as by selectively editing, cleaning, or mining data), or aspects of how the data were compiled or analyzed may be concealed to likewise encourage readers to draw inaccurate conclusions (Gardenier & Resnik 2002: 70). Misuse is commonly divided into two categories: falsification, in which real data are manipulated, and fabrication, in which data are invented outright.

Cite This Paper

"Numerical Research That Can Be" (2013, August 01). Retrieved April 19, 2026, from https://www.paperdue.com/essay/numerical-research-that-can-be-93861
