The evaluation framework is a strategic plan that describes the approach used to guide the data generation, analysis, and interpretation of a research project. It answers key questions that support evaluators in carrying out the evaluation (McDonald et al., 2001):
Why is the evaluation being conducted?
What will be done?
Who will do it?
When will it be done?
How will the evaluation findings likely be used?
The evaluation framework should include a concise description of:
the program's objectives and goals
resources and scope of the evaluation
evaluation objectives and questions
outputs, outcomes, and measures
data sources and data collection methods
data analysis strategy
timelines and anticipated reporting dates
roles and responsibilities
strategy for disseminating results and developing recommendations
The evaluation framework is designed to help evaluators operationalize the research questions in the context of the project and specify how these questions will be measured. Key outputs and outcomes must be identified for each question, and measures (indicators) for each output and outcome.
As stated by Green and South (2006), “Having good, clear objectives in place will make the job of selecting indicators much easier” (p. 69).
The expected outcomes of a project need to be identified to guide the selection of the indicators of its success. To provide useful information (Patton, 1997), evaluation indicators, supported by ethical standards, must be relevant, valid, reliable, realistic, and measurable.
The most relevant and practical indicators should be used. Where indicators measure something only indirectly, their limitations must be acknowledged.
Examples of measures of RRI project activities:
The project's capacity to meet stakeholders' needs
The participation rate
Levels of stakeholders' satisfaction
The efficiency of resource use
The efficiency of an intervention (e.g., training)
Examples of measures of RRI project effects:
Changes in stakeholders' behavior
Changes in community norms, policies, and practices
Changes in social innovation (e.g., quality of life, …)
Changes in settings or environment around the project
What are the key components to describe how evaluation data will be generated?
A description of any participants
Participant recruitment strategy
Consent processes and how ethical concerns such as confidentiality are addressed
Data collection methods (e.g., interviews, focus groups, surveys)
Whether collected data will be qualitative, quantitative, or both
If data is extracted from an existing database, a description of the original database is to be provided
If document reviews are used, an overview of how this is done and the nature of the documents being reviewed
A description of any available baseline measures, if applicable
Whether a literature review will be carried out, and if so, a description of the focus of the review and search strategy
A description of how access to third-party data will be negotiated, if the evaluation involves a third party, as well as a description of any information-sharing agreements, if applicable
What are some potential sources of data?
Stakeholders - beneficiaries (e.g., project participants' access; community-members' feedback; artifacts produced by program participants or community members)
Project providers (e.g., associates, collaborators and organisations supporting the project)
Observers or people who are not part of the project (e.g., public in general, external communities)
What are some of the methods for gathering evaluation data? (Posavac and Carey, 1997)
Summaries of records/documents
Extractions from administrative datasets
What should be described about who will do what (team), when (timeline), and how (procedures)?
Communicate the evaluation plan
Carry out literature reviews/collect background information
Develop tools, instruments, and consent procedures
Collect, enter and analyze data
Write-up and disseminate results
When the evaluation needs to begin
Overview of tasks that need to be completed
How to obtain the information needed for the evaluation
Due dates for feedback and reports
When the evaluation needs to end
Canadian Institutes of Health Research, Natural Sciences and Engineering Research Council of Canada, Social Sciences and Humanities Research Council of Canada. (2010). Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans. Retrieved from http://www.pre.ethics.gc.ca/pdf/eng/tcps2/TCPS_2_FINAL_Web.pdf
Centers for Disease Control and Prevention [CDC]. (1999). Framework for Program Evaluation in Public Health. Retrieved from http://www.cdc.gov/mmwr/preview/mmwrhtml/rr4811a1.htm
Green, J. & South, J. (2006). Evaluation: Key Concepts for Public Health Practice (1st edition). Berkshire, England: Open University Press.
Health Communication Unit. (2007). Evaluating Health Promotion Programs, Version 3.4. Health Communication Unit, Centre for Health Promotion, University of Toronto. Retrieved from http://www.thcu.ca/infoandresources/publications/EVALMaster.Workbook.v3.6.08.15.07.pdf
KU Work Group for Community Health and Development. (2011). Chapter 36, Section 5: Developing an Evaluation Plan. Hampton, C: University of Kansas. Retrieved from the Community Tool Box: http://ctb.ku.edu/en/tablecontents/sub_section_main_1352.aspx
Patton, M.Q. (1997). Utilization-Focused Evaluation (3rd Edition). Thousand Oaks, CA: Sage Publications.
Posavac, E.J. & Carey, R.G. (1997). Program Evaluation: Methods and Case Studies (5th Edition). Boston: Prentice Hall.
W.K. Kellogg Foundation. (2004). Logic Model Development Guide. Retrieved from http://www.wkkf.org/~/media/20B6036478FA46C580577970AFC3600C.ashx
Alberta Health Services. (2016). Evaluation Plan and Evaluation Framework. Retrieved from https://www.albertahealthservices.ca/assets/info/res/mhr/if-res-mhr-eval-resources-plan-framework.pdf