The research methods and design of this non-experimental qualitative study are aligned with its goal of investigating the factors that affect cloud computing adoption through general users' perceptions of cloud technologies. The research questions, population, and sample-size definition, which are essential to any effective methodology, are predicated on the observation that the technologies comprising cloud computing have significant potential to provide humanitarian benefits and accelerate educational attainment on a global scale. The technology components of cloud computing, including Platform-as-a-Service (PaaS), Infrastructure-as-a-Service (IaaS), and Software-as-a-Service (SaaS), must be tightly orchestrated to deliver applications online that people can use. The performance and responsiveness of cloud computing applications, the majority of which are delivered as SaaS-based software, will also need to be measured in this study, as their performance will be a predictor of general users' perceptions of this technology. The scalability of cloud-based applications will also be captured in the study, as it is a predictor of how well the applications meet users' minimum performance expectations. It is very difficult for cloud computing application providers to ensure their applications are scalable, that is, able to serve many people at the same time. As this study concentrates on the factors affecting cloud computing adoption, among the most critically important will be the performance of the application itself, the depth of its features and functions, and the applicability of the technology to humanitarian and educational needs.
Population and Sample
With the study designed as non-experimental and qualitative in scope, the target population and sample need to be inclusive enough to capture a sufficiently wide breadth of users yet focused enough not to miss the most critical audiences who influence cloud computing adoption. The intended research population therefore spans the spectrum of users across the value chain of cloud computing providers' customer bases. This includes the consumers or users of the services, including university students and faculty, in addition to the many IT professionals and managers who enable cloud computing-based applications to be used successfully and updated on a regular basis.
With a value-chain-based approach to defining the research population and sample, the following specific audiences comprise the research population. Convenience sampling with randomization will be used with the goal of capturing as broad a cross-section as possible of each of these audiences. The audiences of this study include: public subscribers to cloud-based services, including the online storage provider box.net and users of other cloud services; university students and faculty who regularly use a wide variety of cloud computing applications in their daily routines; institutional chief information officers (CIOs), IT managers, and their engineering teams and staffs; and the engineering teams of major IT service providers who deliver the service.
Sampling of each of these audiences will be completed on a randomized basis using convenience sampling to support representativeness. Complete population data sets by professional group and IT team are not available for confidentiality reasons; IT providers do not release this type of data because it is considered highly confidential and essential to their competitive position in the market. Using convenience sampling is also consistent with the qualitative aspects of the study and will be more efficient to implement from a time standpoint. It is also a technique that circumvents the restrictions cloud computing providers place on their employment data, in addition to the tightly controlled data that Internet Service Providers (ISPs) rely on to manage their teams and the level of service they deliver.
Using a randomized convenience sampling technique allows for greater flexibility in capturing data with a variety of instrumentation approaches. Data collection techniques will range from in-person interviews to e-mail-based questionnaires sent to the IT management and leadership audiences in the sampling frame. The sampling procedure for cloud computing application users, students, and faculty will be conducted in person with a printed survey that respondents will be assisted in completing. An iPad running the survey will be useful for quickly tabulating results in SurveyMonkey or a comparable online survey tool. These in-person interviews will be arranged through advance telephone calls and respondent recruitment with students and faculty. Each audience's total pool of respondents will be randomized before the actual interviews to further ensure the representativeness of each specific audience interviewed.
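The pre-interview randomization described above can be sketched in a few lines of Python. This is a minimal illustration, not part of the study's actual procedure: the audience names and pool sizes are invented placeholders, and sampling is drawn without replacement so no respondent is selected twice.

```python
import random

# Hypothetical respondent pools for each audience in the sampling frame.
# The names and pool sizes here are illustrative only.
respondent_pools = {
    "subscribers": [f"subscriber_{i}" for i in range(200)],
    "students_faculty": [f"member_{i}" for i in range(150)],
    "it_management": [f"manager_{i}" for i in range(80)],
}

def randomize_pool(pool, sample_size, seed=None):
    """Draw a random sample from a convenience pool without replacement."""
    rng = random.Random(seed)  # seed allows the draw to be reproduced
    return rng.sample(pool, sample_size)

# Draw 30 respondents per audience ahead of the interviews.
samples = {name: randomize_pool(pool, 30, seed=42)
           for name, pool in respondent_pools.items()}
```

Seeding the random generator makes the draw reproducible, which supports an audit trail for the sampling step.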
For the IT management, CIO, and IT support respondents, e-mail-based surveys will be used that include an online survey link each respondent can use. The incentive to complete the survey will be the offer of a summary of the results. This incentive often works well for gaining IT professionals' participation, as this profession is often interested in industry surveys of cloud computing adoption.
There will be two forms of instrumentation in this study, the first being a printed survey that seeks to capture data around the following six research questions:
RQ1: How do IT marketing tactics affect cloud computing adoption?
RQ2: How does migration strategy from traditional systems affect cloud computing adoption?
RQ3: How does data type (sensitive/non-sensitive) affect cloud computing adoption?
RQ4: How does the data storage location affect cloud computing adoption?
RQ5: How do utility service agreements affect cloud computing adoption?
RQ6: Should the use of an internal cloud be considered a valid component of cloud computing?
The first form of instrumentation will be a paper-based survey used for interviewing existing online subscribers, students, and faculty. The logic of using a paper-based survey is predicated on capturing data quickly and easily, and on giving each respondent an opportunity to review the questions as the survey progresses. Qualitative studies also often place the survey instrument on an Apple iPad or comparable device to streamline data collection. That will be attempted for the in-person interviews, in conjunction with the paper survey, to further increase communication accuracy and clarity.
The second form of instrumentation will be an online survey created in SurveyMonkey, Zoomerang, or a comparable online research application, all of which are themselves cloud-based. These online survey tools are easily learned and used, and will be critically important for gaining responses from IT professionals in diverse locations that make in-person interviews nearly impossible. Relying on online surveys will also accelerate the data collection and analysis phase of the study. Both SurveyMonkey and Zoomerang have statistical analysis tools built into their applications, which will save many hours of data analysis time by quickly aggregating the total base of survey responses.
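The kind of aggregation these tools perform can be illustrated with a short Python sketch. The responses below are invented for the example, not study data; it simply tallies a Likert-style item and converts each option to a share of the total, which is the core of what the built-in analysis features automate.

```python
from collections import Counter

# Illustrative responses to a 5-point Likert item on cloud application
# performance; these values are fabricated for the sketch.
responses = ["agree", "strongly agree", "neutral", "agree",
             "disagree", "agree", "strongly agree", "neutral"]

def aggregate(responses):
    """Tally responses and express each option as (count, percent of total)."""
    counts = Counter(responses)
    total = len(responses)
    return {option: (n, round(100 * n / total, 1))
            for option, n in counts.items()}

summary = aggregate(responses)
# e.g. summary["agree"] -> (3, 37.5)
```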
For any qualitative research study to be meaningful, a sufficient level of rigor must be applied to its research design, methodology, implementation, analysis, and use of results. The intent of qualitative research is to gain insights into how the cumulative effects of specific social situations, roles, groups, the performance of emergent or nascent technologies including cloud computing, and the interactions of teams combine to deliver new knowledge not available before.
Qualitative researchers therefore need to rely on the credibility criterion and persistent observation to ensure that the assumptions made, the data collected and aggregated, and the conclusions drawn are defensible from a validity standpoint. Only by relying on the credibility criterion and persistent observation can researchers ensure that their qualitative study is ethically and structurally sound. The credibility criterion is very comparable to internal validity, with the focus on creating connections between the responses of experts (in this study, the CIOs and IT management teams) and the users of the services (subscribers, students, and faculty), with both groups of responses compared to the literature review findings that serve as one of the knowledge bases of this study. Validity is achieved when there is complete congruence between these factors and the research results provide new insights into the field, supported by a knowledge base sufficient to sustain the research assumptions and their results.
The dependability of the research study is measured by how reliably the results capture actual behavior, interrelationships, or dynamics in the broader population of interest. Reliability, from the standpoint of qualitative studies, is predicated on a clear definition of the audiences involved in the study, sufficient sampling and randomization, including convenience sampling that lacks selection bias, and the need to ensure that analysis and recommendations are devoid of researcher bias.
Despite lacking the statistical sampling rigor and complete population definition that characterize quantitative research designs, qualitative research can also reach reliable results. By concentrating on the quality and depth of insight gained during interviews, using open-ended questions, allowing respondents to expand on their perceptions, and, most importantly, capturing and quantifying their attitudinal rankings of cloud computing application performance, qualitative studies can achieve the same level of data value as quantitative studies. The key to attaining a high degree of reliability in qualitative studies is to stay consistent and congruent with the audience decisions, followed with a research instrument that captures the specific…