Program Evaluation
Program evaluation refers to a systematic way of collecting, interpreting, and using information to answer questions about policies, projects, and programs, especially about their efficiency and effectiveness. People generally want to know whether the programs they fund and put other effort into are working as intended (McNamara, 2002). Essential considerations in program evaluation include the cost of the program, ways in which it might be improved, whether it is feasible, and what alternatives exist (Priest, 2001). Answering these questions well requires the joint effort of stakeholders and evaluators. As a formal practice, program evaluation is relatively recent; it rose to prominence in the United States during the 1960s under the Kennedy and Johnson administrations. Program evaluations can involve both quantitative and qualitative research techniques (Priest, 2001). Evaluators come from various backgrounds, for instance economics, sociology, psychology, and social work.
An evaluation may be done at different phases in the lifetime of a program, and each stage raises questions that the evaluator attempts to answer. The type of assessment depends on the stage: evaluators may investigate the need for a program, the design of the program, whether the program is being implemented according to plan, the outcomes or impact of the program, and the program's cost and efficiency (Priest, 2001).
Needs assessment explores the population the program targets to see whether the presumed need is actually present (Priest, 2001). It studies the legitimacy of the problem and how best to solve it: it diagnoses the problem and measures its extent and its effects. For instance, a housing program aiming to reduce homelessness might begin by counting the homeless people in an area and describing their demographics.
Assessment of program theory, also referred to as the logic model, examines the assumptions behind the program's design and how the program's activities are supposed to produce the projected results. The logic model is often not made explicit by the people in charge of a program; it is merely assumed, and the evaluator has to find out from program staff how the program is supposed to attain its goals.
The implementation of a program needs to be assessed as well. Process analysis studies how the program is actually being carried out. The evaluation determines whether the aspects critical to the program's success are in place: whether the target populations are being reached, whether people receive the services intended for them, and whether staff are sufficiently competent. Assessing implementation is a continuous process (Priest, 2001). Predefined measures may be used to evaluate whether implementation is effective, which is especially useful in sectors such as education that follow complex protocols.
Program outcomes are measured as well. They are measured in a specific way to see whether the program has attained its projected outcomes; the results tell whether the program is useful at all. The measurement also helps clarify stakeholders' concept of the program, and the analysis shows the impact of the work on the people it serves. This information can assist in improving the program or other programs. Sophisticated statistical techniques may be applied to assess the cause-and-effect association between the program and its outcomes (Priest, 2001).
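At its simplest, outcome measurement compares participants against a comparison group. The sketch below, with entirely hypothetical data for an imagined tutoring program, shows the naive version of such a comparison; the sophisticated techniques mentioned above exist precisely because this naive difference can be biased (for example, by self-selection).

```python
from statistics import mean

def outcome_difference(program_group, comparison_group):
    """Difference in mean outcomes between program participants
    and a comparison group (a naive effect estimate)."""
    return mean(program_group) - mean(comparison_group)

# Hypothetical test scores: tutoring-program participants
# versus non-participants.
participants = [72, 80, 75, 90, 85]
comparison = [65, 70, 68, 74, 71]

effect = outcome_difference(participants, comparison)
print(round(effect, 1))  # 10.8 points higher on average
```

A positive difference suggests, but does not prove, that the program helped; ruling out alternative explanations is the causation problem discussed below.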
Efficiency assessment, finally, determines how efficient a program is. Evaluating professionals weigh the benefits of the program against its cost in order to draw judgments (McNamara, 2002).
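One common way to express this weighing is a cost-effectiveness ratio: total program cost divided by units of outcome achieved. The figures below are hypothetical, assuming two homelessness programs compared on cost per person stably housed.

```python
def cost_effectiveness_ratio(total_cost, units_of_outcome):
    """Cost per unit of outcome, e.g. dollars per person housed."""
    return total_cost / units_of_outcome

# Hypothetical budgets and results for two programs.
program_a = cost_effectiveness_ratio(500_000, 125)
program_b = cost_effectiveness_ratio(300_000, 60)

print(program_a)  # 4000.0 dollars per person housed
print(program_b)  # 5000.0 dollars per person housed
```

On this measure, the program with the lower ratio delivers the same outcome at less cost, though such comparisons only make sense when the outcome units are genuinely comparable.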
Determining causation means assessing whether the changes intended by the program actually occurred in the target population and were brought about by the program. An observed outcome may be unrelated to the program, and causation can be hard to establish for reasons such as self-selection bias.
Reliability, validity, and sensitivity are imperative in program evaluation. The reliability of the measures used should be ensured, since the more reliable a measure is, the more statistical relevance and credibility it has. Validity refers to whether a measure captures what it is actually meant to measure; the stakeholders determine what counts as valid. Sensitivity is assessed to see whether a measure can detect the program's impact on the problem it seeks to address. Only measures that are sufficiently reliable, valid, and sensitive make a program evaluation credible (McNamara, 2002).
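Reliability can itself be quantified. One widely used statistic for survey-based measures is Cronbach's alpha, sketched below with a hypothetical three-item survey answered by five respondents; values closer to 1 indicate a more internally consistent, hence more reliable, instrument.

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a multi-item scale.

    item_scores is a list of items, each a list of scores
    (one score per respondent, same respondent order per item).
    """
    k = len(item_scores)
    # Sum of the variances of the individual items.
    item_vars = sum(pvariance(item) for item in item_scores)
    # Variance of each respondent's total score across all items.
    totals = [sum(resp) for resp in zip(*item_scores)]
    return (k / (k - 1)) * (1 - item_vars / pvariance(totals))

# Hypothetical 3-item survey, 5 respondents, scored 1-5.
items = [
    [4, 5, 3, 4, 5],
    [4, 4, 3, 5, 5],
    [5, 5, 2, 4, 4],
]
print(round(cronbach_alpha(items), 2))  # 0.81
```

A common rule of thumb treats alpha above roughly 0.7 as acceptable, so this hypothetical scale would be considered reliable enough to use.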
Planning a program evaluation is an elaborate process that can be summarized in four parts: focusing the assessment, collecting information, using the information, and managing the assessment (McNamara, 2002).
Program evaluation and research are different undertakings. Some professionals shy away from program evaluation in favor of research, but in truth it is easier and less time-consuming than research. Research tries to prove the effectiveness of a practice, whereas evaluation attempts to increase the efficacy of a practice (Priest, 2001; Taylor-Powell, 1996).
In the context of health, a well-done program evaluation is necessary: it helps produce health interventions that are valuable, feasible, and accurate. One way to evaluate a health program is to follow the framework published by the CDC. The framework is designed to guide the evaluation of public health programs, but because it follows general program evaluation principles it is applicable beyond public health. It is not an exact science and should be adapted to fit each scenario in which it is used. The framework is practical and non-prescriptive, and it encapsulates the critical essentials of program evaluation; following its steps and standards supports an understanding of each scenario. Health program evaluations emphasize assessment methods that involve evaluation personnel and program stakeholders alike. In public health, the usefulness of the CDC framework translates into better planning of health strategies, refinement of ongoing programs, and demonstration of the results of financial investments.

The framework consists of six linked steps: engaging stakeholders, describing the program, focusing the evaluation design, gathering evidence, justifying conclusions, and ensuring use and sharing lessons learned. The steps are interdependent, but they do not have to be taken in a strictly linear order. Alongside the steps is a set of 30 standards, consolidated into four groups (utility, propriety, feasibility, and accuracy), which indicate whether the evaluation is likely to be effective. The framework can be adapted to fit different health scenarios, and the assessment can be integrated into routine practice.
For instance, program staff could take part in the evaluation by answering assessment questions over a particular period (Centers for Disease Control and Prevention, 2016).
Finally, when asked about the word evaluation, few people would visualize program evaluation as people meeting to deliberate on data. Evaluation is meant to produce collaboration toward greater success by assessing the factors at play in an organization's project. Program evaluation has to be distinguished from research: evaluation focuses on a program, while research targets a population; evaluation seeks to improve an undertaking, while research tries to establish facts; an evaluation determines value, but research is not focused on that; and evaluations monitor current progress, whereas research is done in retrospect. When conducted effectively, program evaluation benefits any ongoing undertaking in any setting. When applied to health, it produces results that are valuable, practical, and accurate.
References
Centers for Disease Control and Prevention. (2016). Framework for Program Evaluation in Public Health. Retrieved April 19, 2017, from http://www.cdc.gov/eval/framework/index.htm.
McNamara, C. (2002). Basic Guide to Program Evaluation. Retrieved April 19, 2017, from http://scholar.google.com.ezproxy.trident.edu:2048/scholar?hl=en&q=Guide+to+program+evaluation&btnG=&as_sdt=1%2C5&as_sdtp=
Priest, S. (2001). A program evaluation primer. The Journal of Experiential Education, 24(1), 34-40.
Taylor-Powell, E. S. (1996). Planning a program evaluation. Retrieved April 19, 2017, from http://learningstore.uwex.edu/assets/pdfs/G3658-1.PDF.