New Brunswick Extra-Mural Program Evaluation Proposal

The New Brunswick Extra-Mural Program (EMP) operates under the supervision of the Regional Health Authorities (RHAs) to bring health services into people's homes. It is open to residents of any age who meet the eligibility criteria, and it focuses on the client and family. The EMP seeks to succeed by bringing together all parties, including health care providers, physicians, clients, and patients' families, in a coordinated manner.
The Extra-Mural Program fulfils its mandate through services that include acute, palliative, chronic, rehabilitative, and supportive care. All EMP clients can access medical, physiotherapy, occupational therapy, respiratory therapy, clinical dietetics, social work, pharmacy, speech-language pathology, and nursing care services on a 24/7 basis (New Nouveau Brunswick, n.d.). The goal of this evaluation proposal is to determine the program's effectiveness.
The evaluation will identify program components that function optimally and ought to be replicated and expanded into other initiatives in the future. The program evaluation aims at:
- demonstrating the program's effectiveness to financiers;
- improving program effectiveness and implementation;
- documenting program accomplishments;
- managing limited resources more effectively;
- justifying current funding for the program; and
- supporting the case for increased funding levels.
The evaluation will serve to ensure that I employ a scientific basis for decision-making; that services are result-oriented and socially equitable; that service agencies perform effectively; and that the province and provincial agencies are accountable for their respective services. This can be achieved by using clear plans, feedback systems, and inclusive partnerships to enable continuous improvement and learning. Evaluation effectiveness is facilitated by adherence to pre-set standards. Stakeholder engagement promotes the involvement, input, and power-sharing of the organizations and individuals invested in the evaluation findings.
Stakeholder engagement also aids me because it ensures that stakeholders' unique views are heard and that the evaluation addresses the program's goals, functions, and results. It enhances the likelihood of evaluation efficacy, improves its credibility, safeguards human subjects, fosters cultural competence, and helps avoid real or perceived conflicts of interest.

Background

I have been chosen as the lead program evaluator for the New Brunswick EMP because of my expertise in this area. The team's program evaluators bring different areas of expertise and are assigned evaluation areas accordingly.
To accept this position is to commit to fulfilling all requisite obligations. The program evaluator's role is to partner with the organization and improve the EMP (Module 3: The Role of the Program Evaluator, n.d.). I need to interact with the concerned parties, including consultants, collaborators interested in the program's success, and other stakeholders, to find ways to plan and carry out the evaluation. I have to develop the evaluation plan, budget, and goals; organize the collection of necessary information; present results; and hold discussions with consultants (U.S.
Department of Health and Human Services, 2005). Throughout these dealings, I must keep ethical standards in mind (e.g., dealing considerately and fairly with people). I am ethically and professionally bound to report results honestly and fully. Another important aspect is the protection of human subjects: evaluators must ensure that subjects are unharmed, that participation is voluntary, and that subjects' confidentiality and privacy are protected. One key obligation is clarifying what can and cannot be done.
I must be honest about how far we can judge the quality of project services and the utility of outcomes, how far the project assumes responsibility for the changes that occur, and the limits to generalizing the results to future conditions (Managing the Evaluator: Roles, Responsibilities and Maintaining the Relationship, n.d.). Considerable personal bias may arise if the program evaluator becomes an advocate for the program, since making statements that are not in its favour will then be hard. Likewise, managers are not entrusted to assess their own programs, since they have conflicting interests.
Individuals involved in program development may also be biased toward having their own suggestions implemented. They may be subjective and seek to ensure that their input appears more valuable (Bowen, n.d.).

Program Profile

The EMP in New Brunswick serves patients in their communities and homes, and it does not discriminate on the basis of age. The program, managed by the Regional Health Authorities, offers quality home health care services to all eligible residents whose needs can safely be met in the community.
The program offers patients various options, including:
- the option of forgoing time in hospital;
- the choice to forgo admission to a nursing home;
- the opportunity to leave hospital earlier;
- the option of care over extended durations;
- support at home for people with long-running disabilities;
- rehabilitation services;
- palliative care in the home;
- consolidation of the services patients need, to ensure coordinated treatment; and
- the opportunity for patients and their relatives to make decisions about treatment.
The EMP recognizes that each patient and family differs from others and thus has requirements unique to their situation.
Thus, the EMP strives to develop a targeted plan of care that fulfils the particular needs of the patient (Extra-Mural Program, n.d.).

Description of the Organization

The organization, based in New Brunswick, offers different services professionally through various programs in homes and communities in the area. It not only helps to enhance patients' health, but also endeavours to prevent illness and to help people regain and maintain good health.
All of this happens in home and community settings. In palliative care, the organization strives to provide quality of life to the patient's satisfaction (Vitalite Health Network, 2011). In this paper, the focus will be on how the program can advance institutional priorities through its distinctive and joint influence. In evaluating the program sequentially, the focus is on the program's objectives and on successful ongoing enhancements to it.
All assessments aim to inform decisions. The points of examining a program are therefore: finding out how much progress has been made toward achieving the goals, and finding ways of enhancing the program's performance. The purpose of this evaluation is to find out to what degree the project has achieved the objectives set out in the strategic plan and in the accountability reports. This evaluation is thus interested in confirming the results of the EMP.
It thereby seeks to establish the use and suitability of the measures in place to inspect and assess such projects. In the area of health, the achievement of objectives is assessed, and feedback is given in the form of outcomes and outputs. Outcomes span an extended period and include the impact on clients, families, and communities in terms of health.
A program's outputs are reported over a short period and may be limited to assessing the program's reach, such as how many patients and communities it serves. This proposal sets out the research and analysis to be carried out, presented in an outline, together with the work plan, the schedule for reports, and the deliverables.
The researchers and consultants should be included in the project so that it can be carried out effectively and successfully (Appalachian Regional Commission, 2012).

Research Questions

Though evaluation team members usually brainstorm and generate many possible questions, the following will be prioritized as the key program aspects to be studied at this time.
For ascertaining whether program implementation is proceeding as planned:
- Have appropriate personnel been recruited? Have they been trained properly?
- What challenges was the program tasked to tackle? How were these problems addressed?
- What were the program's long-term goals? Were they achieved?
- Is the program working? How does it relate to the long-term goals?
- What are the characteristics of program beneficiaries?
- What contributed to the achievement of program objectives?
- What measures accurately reveal the benefits the program has brought about and its long-term impact on the community? (Appalachian Regional Commission, 2012)

For determining whether program objectives are met:
- Has proper testing and treatment been conducted for more patients?
- Are they complying with treatment (not lost to care and follow-up)?

The manager and the team in charge of the program can better comprehend its structure by looking into its objectives and modus operandi to improve clients' lives.
Clear objectives, together with clear requirements and expectations for work procedures, will help people be more goal-oriented and focussed, and work towards achieving program objectives. There is a general call for accountability for the resources allocated to these programs, as they use funds from the public and others. At the same time, those who use these services, and the workers and volunteers who make time to participate in them, need to know whether the programs are making an impact and are effective in their pursuit.
Thus, measuring outcomes serves not only to bring accountability into focus but also to enable improvements. Learning can be achieved through the assessment of outcomes, so that what is found can be fed back into the system and result in improvements. Findings can thus be incorporated into the program's activities (Yung et al., 2012).

Ethical Considerations

Before data collection begins, one issue whose importance cannot be overstated must not be overlooked: ethical considerations.
Strategies to protect the rights and dignity of evaluation participants will be incorporated into the project's design and execution. The evaluation can benefit the participants and others. The benefits arise from changes made at the program or agency level; for instance, the evaluation can guide strategies for improving the program's impact, resulting in more positive outcomes for current or future participants. However, risks can accompany these benefits.
The evaluation will carefully consider any harm or risk that may result from it and will take steps to reduce such risk. Everyone who participates in the evaluation will do so willingly. In general, individuals participating in the program evaluation will have the right to:
- choose whether or not to participate, without penalty;
- withdraw at any time, even after agreeing to participate; and
- refuse to complete any section of the evaluation, including declining to answer any question (Evaluation Ethics, 2009).
It may not be possible to carry out the evaluation anonymously, that is, without gathering identifying information such as the participant's name or social security number. Nevertheless, all information collected will be treated as confidential and will not be shared with others. In some cases, concerns about the safety of some participants may arise during the evaluation. The evaluation will be mindful of all participants' needs and will ensure that care is taken to protect every participant as much as possible.
It will be important for the evaluation to be designed so that it is of high quality and ethical (Evaluation Ethics, 2009). Ensuring that participants give informed consent entails:
- providing potential participants with information about the evaluation, including the reasons for carrying it out, what will be required of them, how the information they provide will be used, and how long the evaluation will take;
- describing both the potential benefits of engaging in the program and any likely risks, including foreseeable discomfort from participation;
- sharing information in language that all participants can understand, avoiding jargon and making translation services available if need be; and
- allowing participants a chance to ask questions about the evaluation (Evaluation Ethics, 2009).
Attention to these ethical considerations will help participants feel at ease and offer all the information required of them, knowing that they are in safe hands.
To guarantee confidentiality, the study will incorporate the following strategies:
- gathering data in a private location where interviews cannot be overheard;
- not discussing information about individual participants (findings will generally be evaluated at an aggregate level);
- keeping completed interviews in a secure place where they are not visible to other people; and
- shredding or otherwise securely disposing of completed evaluation materials once their usefulness is over (Evaluation Ethics, 2009).

Stakeholders

Every program has stakeholders. In New Brunswick, these are the people affected by the decisions and actions of the EMP.
They include clients and patients, care providers, physicians, researchers, policy-makers, community businesses, and professional societies. All these groups have valid views on, and requirements of, the program. One group with a valuable contribution to the program, whose views must be taken into account, is the healthcare system, consisting of clinics, hospitals, and research institutions. They can provide solutions to problems through research and experience.
Insurance bodies and employers are also stakeholders with an impact on these programs: through their insurance coverage, people are able to access various EMP services. The government, through the policies it makes at the federal, provincial, or local level, seeks to provide citizens with the best care services possible. Where the research focuses on patient outcomes, decision-makers enable people to have the information needed to choose the best care alternative (AHRQ, 2014).
These stakeholders were included in the study because they are the ones most likely to be affected by the program. Engaging them will create buy-in and help avoid drawbacks not apparent to program staff. Once the stakeholders buy in, they will be able to move the program forward and remove hindrances to change. The stakeholders in this evaluation will help inform the process, interpret the gathered information, and assist the program in making the necessary changes.
In this way, stakeholders will increase the credibility of the evaluation efforts. Moreover, the engaged stakeholders will help disseminate the evaluation's results.

Evaluation Design

Evidence on the evaluation outcomes will be collected using a quasi-experimental study design, which will be used to test the causal hypothesis. The quasi-experimental design will establish a comparison group that is very similar to the treatment group with regard to baseline (pre-test) attributes.
The comparison group will capture what the results would have been had the program not been carried out (the counterfactual). The program can then be said to have caused any difference in results between the treatment and comparison groups. There are various techniques for constructing valid comparison groups; this evaluation will use propensity score matching (PSM).
In PSM, an individual is matched not on every single observable feature but on their propensity score: the predicted probability of participating in the intervention, given their observable characteristics. PSM thus pairs treatment individuals or households with similar comparison individuals or households and calculates the average differences in the indicators of interest.
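As an illustration of the matching step, the following sketch pairs each treated unit with the comparison unit whose propensity score is closest, then averages the outcome differences. All scores, outcomes, and the caliper value here are hypothetical; a real analysis would estimate the propensity scores with a statistical package rather than supply them by hand.

```python
# Minimal sketch of nearest-neighbour propensity score matching.
# All data below are hypothetical illustrations, not EMP figures.

def match_and_estimate(treated, comparison, caliper=0.1):
    """Match each treated unit to the comparison unit with the
    closest propensity score (within `caliper`), then average the
    outcome differences to estimate the average treatment effect."""
    diffs = []
    for t_score, t_outcome in treated:
        # nearest comparison unit by propensity score
        c_score, c_outcome = min(comparison, key=lambda c: abs(c[0] - t_score))
        if abs(c_score - t_score) <= caliper:  # enforce common support
            diffs.append(t_outcome - c_outcome)
    return sum(diffs) / len(diffs) if diffs else None

# (propensity score, outcome) pairs -- e.g. likelihood of enrolling
# in the program and a post-program health-status index
treated = [(0.62, 7.5), (0.48, 6.8), (0.71, 8.1)]
comparison = [(0.60, 6.9), (0.50, 6.5), (0.70, 7.2), (0.30, 5.0)]

effect = match_and_estimate(treated, comparison)
```

Note that the unit at score 0.30 is never matched: it lies outside the region of common support, which is why the comparison pool must be oversampled.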
Hence, PSM will ensure that the average features of the treatment and comparison groups are the same, which is considered sufficient for obtaining unbiased impact estimates. PSM will need data from the treatment group as well as from the potential comparison group. Both samples must be bigger than the sample size proposed by the power calculations (i.e., calculations that show the sample size needed to identify the effects of an intervention), because observations outside the region of common support are rejected.
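A rough sense of the required sample sizes can be obtained from the standard normal-approximation formula for comparing two group means. The effect size, standard deviation, and oversampling factor below are assumptions for illustration only, not figures from the evaluation plan.

```python
import math

def two_group_sample_size(delta, sigma, alpha_z=1.96, power_z=0.84):
    """Approximate per-group sample size to detect a mean difference
    `delta` with standard deviation `sigma`, at a 5% two-sided
    significance level and 80% power (normal approximation)."""
    n = 2 * ((alpha_z + power_z) ** 2) * (sigma ** 2) / (delta ** 2)
    return math.ceil(n)

# hypothetical: detect a 0.5-point change on an outcome with SD 1.0
n_per_group = two_group_sample_size(delta=0.5, sigma=1.0)

# oversample the comparison pool, since PSM discards observations
# outside the region of common support (the 1.5 factor is assumed)
comparison_pool = math.ceil(n_per_group * 1.5)
```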
Generally, oversampling will be greater in the potential comparison group than in the treatment group. PSM will be conducted using administrative records and surveys. Data on the treatment and comparison groups may be drawn from different data sets provided that: (1) they contain the same variables, defined in the same manner; and (2) the data were collected in the same period (White & Sabarwal, 2014).

Methodology

Data will be collected through qualitative and quantitative methods.
As some evaluators acknowledge, carefully developed mixed-method designs have several probable benefits. The mixed-method approach helps strengthen the validity of results by using more than one technique to study a phenomenon. This is known as triangulation and is often considered the main strength of a mixed-methods approach. Combining the two methods will improve the instrumentation of all data-gathering techniques and increase the evaluator's understanding of the findings.
Qualitative approaches will aim at addressing the 'how' and 'why' of the program. Qualitative questions will be open-ended. Qualitative methods will include focus groups, observations, document studies, group discussions and interviews. Quantitative methods, on the other hand, will address the 'what' of the program. The study will use a systematic standardised approach and methods like surveys and questionnaires. The two methods have their advantages and disadvantages.
Qualitative approaches will be good for further exploring the impacts and unintended consequences of the program (An overview of quantitative and qualitative data collection methods, n.d.).

Surveys/Questionnaires

Surveys and questionnaires will be a good way of collecting a large amount of data, offering a broad perspective. They will be administered by mail and electronically. Mail and electronically administered surveys and questionnaires will reach more participants, since they are considerably cheaper to administer. This way, information will be standardised and confidentiality maintained.
The questionnaires will use both closed and open-ended questions (An overview of quantitative and qualitative data collection methods, n.d.).

Interviews

The interviewer will seek to encourage free and open responses, although there will be trade-offs between complete coverage of the program's functions and in-depth exploration of a limited set of questions. The interview setting will be a place where both the interviewer and the interviewee feel comfortable; respondents should feel at ease so that they can respond to interview questions appropriately.
Just like the venue, the time set for the interview must suit the respondents, and the evaluator must arrive early to ensure the venue is ready before the respondents arrive. In-depth interviews will also help capture respondents' perceptions, allowing the evaluator to convey the meaning of a respondent's experience from the respondent's standpoint. In-depth interviews will be conducted with people individually or within a small group (An overview of quantitative and qualitative data collection methods, n.d.).
Focus Groups

Focus groups will gather about 10 individuals who share characteristics relevant to the evaluation. Focus groups are useful in both the formative and summative stages of an evaluation, and these focus groups will provide answers that complement those received from in-depth interview questions (An overview of quantitative and qualitative data collection methods, n.d.).

Document Studies

The study will evaluate the program's existing records to provide insight into the setting that cannot be obtained in any other way.
Documents will be divided into two categories: public records and personal documents. Public records will be collected from inside or outside the program setting where the evaluation occurs. Such materials will be useful for better understanding the project's participants and for making comparisons between groups and communities. Personal documents will help the evaluator understand how participants see the world and what they want to communicate. Unlike other sources of qualitative data, gathering data from documents is relatively unobtrusive and requires little cooperation from the persons in the setting under study.
Information from these documents will be useful for generating interview questions or identifying events to be observed (An overview of quantitative and qualitative data collection methods, n.d.).

Observations

Observational techniques will be used to gather firsthand data on the program and on the processes and behaviours to be evaluated. The observation method will give the evaluator an opportunity to collect data on various behaviours, capture a wide variety of interactions, and openly evaluate the program.
Through direct observation of operations and activities, the evaluator will be able to form a holistic view, which involves understanding the context in which the program operates. This is especially important because what matters is not an event of interest in isolation, but the way the event fits into, or is affected by, a sequence of events.
Through the observational approach, the evaluator will be able to learn about issues that participants may be unaware of, or are unwilling or unable to discuss candidly in an interview or focus group (An overview of quantitative and qualitative data collection methods, n.d.).

Budget

The following is the budget for the project evaluation over a period of four months, justified as follows.

Staff

Project Director: the person who oversees the project, the employees, and the experts involved in the evaluation.
This person will spend 25% of their time on the project; the base salary will be $50,000. Care Coordinator: this person will spend all their time on the project and will handle technical aspects such as issuing and overseeing the questionnaire, teaming up with the other consultants and the director to develop relevant questions, gathering data, and inspecting progress towards the project's goals (American Academy of Pediatrics, n.d.). The base salary will be $35,000.
Evaluation Consultant: an evaluation consultant will be sought from a university in the area. This person will help construct the evaluation, select the methods of collecting, measuring, and storing data, and incorporate evaluation measures into daily activities. They will work in the evaluation team with the other staff and will also help assess and analyse the data and produce reports (American Academy of Pediatrics, n.d.).
The budget for this will be $3,000. Office Supplies and Equipment: the supplies used will include pens, paper, ink and toner cartridges for the printers and photocopiers, and other stationery used by the project staff; the budget will be $500. The equipment required for the project will include desktop computers, office space, and a photocopying machine; the budget for these will be $12,000. Indirect costs are allocated at the already-negotiated rate of 10%.
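The figures above can be tallied as a quick check. Whether the 10% indirect rate applies to all direct costs is an assumption made here for illustration.

```python
# Illustrative tally of the budget line items described above.
# Applying the 10% indirect rate to all direct costs is an assumption.

line_items = {
    "Project Director (25% of $50,000)": 0.25 * 50_000,
    "Care Coordinator (100% of $35,000)": 35_000,
    "Evaluation Consultant": 3_000,
    "Office supplies": 500,
    "Equipment": 12_000,
}

direct = sum(line_items.values())   # direct costs
indirect = 0.10 * direct            # pre-negotiated indirect rate
total = direct + indirect           # total evaluation budget
```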
Timeline

The program evaluation will be performed over a four-month period, with each stage allocated one month. The evaluation team will also collect all of the final data on performance outcomes. At the evaluation's end, the team will present the Public Use Dataset and the Final Evaluation Report. Once the evaluator confirms receipt and acceptability of the deliverables, the evaluation will come to a close (Garcia, 2003).
Illustrative Timeline for Evaluation Activities (June - September 2016)

  Evaluation planning        June
  Data collection            July
  Analysis/interpretation    August
  Report/dissemination       September

Evaluation Planning

This will involve determining why the evaluation is being conducted and what decisions it will inform once it is over. Various forms of program evaluation exist; to understand which form the program will use, the evaluation team will clearly define the evaluation's purpose.
After considering the program's purpose and goals, the evaluation will consider what to do with the information gathered during the study. This entails knowing who will use the information, to whom it will be communicated, and how it will be communicated to the established stakeholders (Hands On Network: Evaluating Your Volunteer Program, n.d.).

Data Collection

Different audiences will provide the evaluator with valuable information, so this step will involve considering who will be asked to supply it.
In this study, the information collected will be in both qualitative and quantitative forms. After defining the type of information to be collected and how it applies to the project, the most effective data collection methods will be identified. These will include mailed questionnaires or surveys, interviews with stakeholders (such as staff, clients, volunteers, and community members), evaluation of existing documentation, and focus groups among the stakeholders (Hands On Network: Evaluating Your Volunteer Program, n.d.).
Analysis/Interpretation

Analysis and interpretation will take place after the data has been collected. Before beginning, the evaluation team will revisit the evaluation goals, which provide the foundation for sorting the data. Proper analysis, sorting, and interpretation will be based on the data collected. Once the data has been tabulated, clustered, and sorted, the information will be put in perspective by interpreting the results.
After interpreting the results, the evaluator will begin writing an evaluation report on the program (Hands On Network: Evaluating Your Volunteer Program, n.d.).

Report/Dissemination

After the data is analysed, a report on the results will be developed for the key stakeholders. It is good practice for the evaluator to report the results to the stakeholders who provided information; this way, participants will know that their contribution to the program evaluation was valued (Hands On Network: Evaluating Your Volunteer Program, n.d.).
Findings

Qualitative and quantitative techniques will be employed for data analysis. Simple frequency counts will be used to analyse the quantitative data. Content analysis and other qualitative techniques will be applied to review patient records, training curricula, and charts for patterns and themes. A meeting will be organized to interpret the findings; stakeholders such as the program manager and team members, administrators, and medical representatives and staff will be invited to attend. Evaluation data will be compared with predefined program benchmarks.
At this meeting, stakeholders and individuals engaged in the program's operations can validate the evaluation findings and offer their recommendations accordingly. Multiple channels will be employed to disseminate the findings: they will be presented at regular medical staff and program staff meetings; county health commissioners will be sent a brief report and given a presentation; and the health department's periodic newsletter will carry an article on the findings (Garcia, 2003).
Most numeric data will be reported as percentages, because that is what most readers are used to. However, the evaluator will be careful not to use percentages for a small data set. It will be the evaluator's task to bring what is essential to the readers' attention; the evaluation report will therefore follow the analysis plan pre-developed by the evaluation team to guide how the findings are summarized.
The data will be presented in a way that lets readers calculate the findings for themselves; however, the evaluator will not be compelled to show every possible finding. The evaluator will usually round percentages to the nearest whole number in the report, and will order the report's content so as best to clarify the evaluation findings. The findings will combine information from several data collection sources in order to clarify the relevant findings about the program under evaluation.
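The reporting rule described above can be sketched as a small helper: report rounded percentages, but fall back to raw counts when the sample is too small. The minimum sample size of 20 is an assumed threshold for illustration, not a figure from the evaluation plan.

```python
# Sketch of the reporting rule: rounded percentages for adequately
# sized samples, raw counts otherwise. The n >= 20 cutoff is assumed.

def report(count, total, min_n=20):
    if total < min_n:
        # too few cases: a percentage would overstate precision
        return f"{count} of {total} respondents"
    return f"{round(100 * count / total)}%"

# hypothetical tallies: a large survey sample vs a small interview set
large = report(58, 171)
small = report(3, 12)
```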
It will also be the evaluator's task to show how information from various data sources revealed essential, actionable findings, or to explain why data collected through different sources may differ (Bruner Foundation, n.d.).

Discussion

An evaluation can, and hopefully will, become part of the change processes in programs and organizations. Wherever they are employed, evaluations typically lead to incremental changes (provided any changes are attributable to them). Program evaluation is geared towards improving the program at hand.
The evaluation will be informed by the program's purpose. The results of this project will include: 1) showing how effective the program has been; 2) identifying ways to make the program better; 3) showing how adjustments can be made; 4) assessing how well the program has been administered by those in charge; and 5) assessing how the project's funding is being utilised (U.S.
Department of Health and Human Services, 2005). The evaluation findings can offer information about the activities being carried out by the program, together with data illustrating the effects of those activities. Information about current activities may help decision-makers handle immediate issues concerning resource distribution, staffing patterns, and the provision of services to individual clients or populations.
Likewise, data about the program's service outcomes can lead to more rational decisions about expanding or continuing the program if it is effective, or modifying or eliminating it if it is less effective. Decisions about developing a fresh program or selecting an alternative can be made not only on the basis of the evaluation itself but also on the evaluative data, which makes a major contribution (Lewis, Packard & Lewis, 2012).

Limitations

This evaluation will employ interviewing, a labour-intensive method.
It is imperative that interview subjects trust and feel at ease with the interviewers; subjects may perceive some form of privacy invasion. The evaluation will also use questionnaires, which can be a challenge for some categories of respondents and can yield problematic information when respondents misunderstand questions. Questionnaires fail when response categories cannot be anticipated, as is the case with "how" and "why" questions. Further, the use of focus groups is a limiting factor, as they do not necessarily represent the population's dominant experience.
Meeting true focus group standards may be somewhat challenging. Linguistic barriers are especially important in focus groups, so facilitators must be fluent in the language spoken by participants; it is impractical to combine participants who speak different languages (American Academy of Pediatrics, 2008). Since no perfect research design exists, the results of the evaluation will be assessed in the program's context, considering the evaluation's potential weaknesses. The main drawback of the model used in this evaluation (i.e., PSM) is that it relies on matching people on observable characteristics associated with the likelihood of participating. Thus, if there are unobserved characteristics affecting participation that change over time, the estimates will be biased, affecting the observed results. A further practical limitation of PSM is that it requires the assistance of a person skilled in using statistical packages (White & Sabarwal, 2014).

Recommendation

When preparing the evaluation report, the most important component will be the recommendations made. Professional judgment contributes significantly to this element.
Recommendations must be appropriate and corroborated by evidence, bearing in mind the evaluation's context. Possible recommendations stemming from this evaluation include that the findings be used by the program team and manager to refine program strategies. An evaluation's value lies in its applicability to informing decisions, and an effective evaluation supports action. Meaningful reporting of an evaluation outlines outcomes, elucidates options, pinpoints program weaknesses and strengths, and offers information on the key contextual elements affecting the program, as well as on program improvements (Report and Recommendations, n.d.).
The evaluation findings will help direct the program towards the areas most critical to efficient service delivery. Headquarters and existing programs will be encouraged to initiate efforts to improve service quality and implementation; such efforts need to include continuous practice assessments. To support them, research will follow with the primary intent of aiding programs with implementation and quality improvements. Such research can ascertain which families are most likely to benefit from and participate in the services, determine which aspects of program services are crucial to replicability, and determine the threshold intensity of services.