Program Evaluation Design Term Paper

A constructive, practical, principled, and well-organized process of program evaluation design is necessary for improving program assessment and making it efficient. The framework is intended to structure and systematize the vital components of program evaluation while remaining a realistic, flexible instrument. For effective program assessment, the framework encompasses both the steps and the standards of program evaluation. Following these steps and standards allows an appreciation of each program's context and improves how program evaluations are conceived and carried out. Evaluations can be tied to routine program operations, with the emphasis on useful, ongoing assessment that involves all program stakeholders, not merely evaluation specialists. For ongoing program appraisal, an informal evaluation approach may be sufficient. However, evaluation methods that are explicit, recognized, and justifiable become vital when the stakes of the resulting decisions or program changes increase. Understanding the logic, analysis, and standards of evaluation considered in this framework can contribute to lasting effects, such as basing choices on methodological evidence rather than unfounded assumptions. (Witkin & Altschuld, 1995, p. 46)

A well-guided evaluation requires advance planning about the direction the evaluation will take and the steps needed to get there. The evaluation team should settle on the purpose and the questions to be addressed. To carry out a sound assessment efficiently, it is imperative to put a focused evaluation in place. The plan should outline the questions you are examining, the procedures you will follow, what will be measured, which methods will be used, who will carry out each activity (including analysis and interpretation), what you will do with the data after it is collected, and how the results will be reported. Process evaluations are used to document how faithfully a program is being implemented; they are conducted at regular intervals throughout the life of the program. (Bickman, 1987, p. 31)

This category of evaluation monitors the workings of a program, including the activities taking place, the people organizing them, and the intended target group. Process evaluations assess whether the planned efforts and resources have been assembled and whether activities are being carried out as intended. They identify a program's strengths, its limitations, and the areas needing improvement. Outcome evaluations are used to gauge a program's effect on its stated short-term, intermediate, and long-term goals. This type of evaluation determines what happens as a result of the program and whether the program has achieved its intended goals. Outcome evaluations should be undertaken only when the program is mature enough to produce the desired results. (Thompson, 1980, p. 95)

Knowledge and understanding of how to evaluate complex programs have improved considerably over the previous two decades. The suitability of the evaluation design is a major concern: the design must address the complexity of program behavior and meet the needs of the varied stakeholders. Three categories of evaluation design are generally recognized: experimental, quasi-experimental, and observational. Evaluations employing experimental designs use random assignment to compare the effect of an intervention on one or more groups against an equivalent group or groups that did not receive the intervention. For instance, an evaluation team might select a cluster of similar schools and then randomly assign some of them to receive a tobacco-use prevention program and others to act as control schools. Every school has an equal chance of being chosen as an intervention or a control school. Random assignment reduces the likelihood that the control and intervention schools differ in any way that might account for differences in program results, which gives you latitude to attribute changes in the outcome to your program. For instance, if pupils in the intervention schools delayed taking up smoking longer than pupils in the control schools, you could attribute that success to your program. (Cronbach, 1980, p. 29)
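To make the random-assignment step concrete, the short Python sketch below splits a pool of schools into intervention and control groups at random. The school names, group sizes, and random seed are hypothetical and serve only to illustrate the procedure described above, not to represent any actual study.

```python
import random

# Minimal sketch of random assignment for an experimental design.
# School names and the seed are hypothetical, used only for illustration.
schools = ["School A", "School B", "School C", "School D",
           "School E", "School F", "School G", "School H"]

random.seed(42)          # fixed seed so the illustration is reproducible
random.shuffle(schools)  # every school gets an equal chance of either condition

half = len(schools) // 2
intervention = schools[:half]  # receive the tobacco-use prevention program
control = schools[half:]       # serve as comparison schools

print("Intervention schools:", intervention)
print("Control schools:     ", control)
```

Because assignment depends only on chance, any later difference between the two groups of schools can more plausibly be attributed to the program rather than to pre-existing differences.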

Establishing a proper control group is sometimes difficult, or simply unethical, in a social setting. One answer is to offer the program to the control group after the evaluation data have been collected. Using a quasi-experimental design is another alternative. In this design, comparisons are made between non-equivalent groups, and it does not involve random assignment to intervention and control groups. Observational designs are also employed in program evaluation. These comprise,...

...

Periodic cross-sectional surveys can also inform your evaluation. Case studies are often pertinent when the program is unique, when an established program is used in a new setting, when you are evaluating a distinctive outcome, or when the environment is particularly unstable. (Calfee, Wittwer, & Meredith, 1998, p. 17) Case studies can allow the discovery of community characteristics and how they affect program implementation, as well as the identification of barriers to and enablers of change. Before selecting an experimental or quasi-experimental design for your evaluation, consider the suitability and feasibility of less traditional designs. Depending on your program goals and the intended use of the evaluation reports, these designs may be more appropriate for measuring progress toward program objectives; they are also cheaper and quicker to complete. Bear in mind, however, that saving time and money should not be the chief criterion when choosing an evaluation design. It is essential to select a design that will measure what you intend to measure and will satisfy your immediate and long-term objectives. The goal-based evaluation framework uses pre-established program objectives as the standard for evaluation, thereby holding the program accountable to prior expectations. (Center for Quality Special Education, Michigan Department of Education, 1989, p. 5)
In these circumstances, evaluation planning concentrates on the activities, outputs, and short-term, intermediate, and long-term outcomes described in a program logic model to guide measurement activities. A benefit of this evaluation design is that the evaluation team has latitude and can adapt the evaluation plan if important changes occur in the program's inputs and activities. Progress toward the goals can be measured to document accomplishments and establish accountability in the early phases of your program. The design you choose affects the timing of data collection, how you analyze the data, and the kinds of conclusions you can draw from your findings. A collaborative approach to directing the evaluation provides a useful path to ensuring the suitability and usefulness of your evaluation framework. (Bickman, 1987, p. 31)

The purpose of your evaluation should be stated. It might be to improve the program, to assess program effectiveness, or to demonstrate accountability for resources. The purpose will reflect the stage of your program's development. To help develop the program, you will probably want to conduct a process evaluation. For an established program, you will probably want to conduct an outcome evaluation to assess the program's effectiveness and to show that resources are being used efficiently. Program evaluation can identify different areas of the program that need improvement. For instance, a smoking prevention event might be effective, yet fail to attract or retain many participants. A process evaluation might explain why. (Rossi et al., 2003, p. 64)

Program evaluation can gauge whether a program is progressing toward its desired outcomes. For instance, an evaluation can examine whether a school tobacco prevention event is equipping students with awareness of the dangers of tobacco, or whether a cessation program is increasing the duration or intensity of participants' attempts to quit smoking. Information about a program's effectiveness can be used to make decisions about continuing, modifying, or expanding it. Program managers are generally accountable to funders and other stakeholders, including government officials and policy makers, and must substantiate how their funds are used. Evaluation results can establish that a program is proceeding as intended, meeting its goals, and remaining viable. (Brown, 1992, p. 44)
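As an illustration of how outcome data might be summarized to gauge progress toward a desired result, the sketch below compares the share of students who had not started smoking in an intervention group and a comparison group. All counts and group labels are invented for the example and are not drawn from any actual program.

```python
# Illustrative sketch with hypothetical data: summarizing one outcome measure,
# the share of students in each group who had not started smoking at follow-up.
outcomes = {
    "intervention": {"did_not_start": 172, "total": 200},
    "control":      {"did_not_start": 149, "total": 200},
}

rates = {}
for group, counts in outcomes.items():
    rates[group] = counts["did_not_start"] / counts["total"]
    print(f"{group:>12}: {rates[group]:.1%} had not started smoking at follow-up")

# A simple difference in proportions gives a first look at the program's effect;
# a full outcome evaluation would add a significance test and confidence interval.
difference = rates["intervention"] - rates["control"]
print(f"Difference in proportions: {difference:.1%}")
```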

The evaluation team should also consider the audiences who will use the evaluation reports. The users should be identified and given the opportunity to provide input into the design of the evaluation. Support from the anticipated users increases the likelihood that they will use the evaluation findings. The evaluation's users differ from the broader sphere of program stakeholders in that the data needs of the intended users determine how you focus the evaluation. A focused evaluation collects information for a particular reason or use. Deliberation is needed on the evaluation questions and their approval by the stakeholders. Having identified the users of the evaluation, you should decide the…


Bibliography

Berk, R. A., & Rossi, P. H. Thinking about Program Evaluation. Newbury Park, CA: Sage Publications, 1990.

Bickman, L. Using Program Theory in Evaluation: New Directions for Program Evaluation. San Francisco: Jossey-Bass, 1987.

Brown, L. A Framework for the Evaluation of Adult Literacy Programs.


