Blumenau, Jack (2026). Evaluation Academy Impact and Process Evaluation, 2025. [Data Collection]. Colchester, Essex: UK Data Service. DOI: 10.5255/UKDA-SN-858158
Data description (abstract)
This archive contains the data and code required to replicate the results in the Cabinet Office’s “Evaluation Academy 2025: Impact and Process Evaluation Final Report”.
The Evaluation Academy is a capability building course, developed by the Evaluation Task Force (ETF) in the Cabinet Office, which uses a train-the-trainer model to improve civil service analysts’ ability to teach others about evaluation methods.
Participants of the Academy were provided free, in-person training over the course of five days across a two-week period. The course material for the workshop was split across 10 modules:

- Introduction to evaluation
- Theories of change
- Scoping an evaluation
- Process evaluations
- Experimental designs
- Quasi-experimental methods
- Theory-based evaluations
- Value for money evaluations
- Planning and managing an evaluation
- Communicating evidence
The Evaluation Academy aims to ensure that participants can deliver and explain the content of the course to a high standard. The goal is for participants both to understand and deliver the theoretical material on evaluation, and to be able to show those they go on to train how to apply the content in real-life evaluation scenarios.
The ETF conducted an impact evaluation of the first stage of the Evaluation Academy’s train-the-trainer model in July 2025. The impact evaluation employed a randomised controlled trial (RCT) which used a two-arm, individual-level waitlist design to investigate the impact of the Evaluation Academy on two primary outcomes – participants’ confidence in delivering evaluation training and the size of participants’ evaluation network – and one secondary outcome – participants’ knowledge of evaluation methods and processes.
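In a waitlist design of this kind, each enrolled individual is randomly assigned either to receive the Academy immediately (treatment) or to receive it after the trial (control). A minimal sketch of individual-level randomisation into two arms, assuming a simple even split, illustrative participant IDs, and an arbitrary seed (this is not the study's actual randomisation code):

```python
# Hypothetical sketch of two-arm, individual-level randomisation.
# The seed and participant IDs are illustrative assumptions only.
import random

def randomise_two_arms(participant_ids, seed=2025):
    """Shuffle participant IDs and split them evenly into two arms."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"treatment": ids[:half], "control": ids[half:]}

# 118 eligible civil servants were enrolled (see the data collection notes).
arms = randomise_two_arms(range(1, 119))
print(len(arms["treatment"]), len(arms["control"]))  # prints: 59 59
```

With 118 participants an even split yields 59 per arm; the actual trial's arm sizes and procedure are described in the final report.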
The impact evaluation demonstrated large and significant effects of the Evaluation Academy on both primary outcomes. Compared to those in the control group, participants reported substantially higher levels of confidence in delivering evaluation training (an average increase of approximately 1 point on a 5-point scale) as well as substantial increases in the size of their cross-government evaluation networks (an average increase of 8 people).
By contrast, the impact evaluation found more limited evidence of impact on participants’ knowledge of evaluation methods and processes. Although treatment group participants performed better on average than the control group on a post-intervention set of evaluation knowledge questions, this difference was not statistically significant. The absence of large knowledge effects reflects the fact that civil servants participating in the first stage of the train-the-trainer model had reasonably high levels of pre-existing evaluation knowledge.
| Data creators: |
Blumenau, Jack
|
| Sponsors: |
NA
|
| Topic classification: |
Education
|
| Keywords: |
EDUCATION, EVALUATION, TEACHER TRAINING, DATA
|
| Date published: |
20 Mar 2026 12:58
|
| Last modified: |
20 Mar 2026 12:58
|
| Collection period: |
| Date from | Date to |
|---|---|
| 18 May 2025 | 31 July 2025 |
|
| Country: |
United Kingdom |
| Data collection method: |
The primary data for this study came from survey instruments designed by members of the Evaluation Task Force and distributed online. Baseline survey data, which captured information relating to both the primary and secondary research questions for the impact evaluation, was collected before randomisation into treatment and control groups, approximately 6 weeks before the intervention. The baseline survey was distributed through an email containing a link to a self-completed online survey hosted on Qualtrics.

Endline survey data was collected from both treatment and control groups immediately after the intervention ended (i.e. after the train-the-trainer training). For the treatment group, endline survey data was collected via a self-completed online survey that participants were instructed to complete in person during the final session of the Academy. Participants were briefed before beginning the survey that they should not collaborate with each other or use anything other than their own knowledge to complete the survey. The survey was conducted in silence, with an invigilator in the room to prevent participants from discussing the survey with one another.

The control group participants were emailed the online survey to complete at the same time and were encouraged to complete it as soon as possible. The control group were sent two reminders to complete the endline survey, and all control group responses were received within 28 days of the treatment-group data collection. The trial experienced very limited attrition, with endline data collected from 96.6% of participants (95.2% in the treatment group, and 98.2% in the control group).

The target population for this evaluation was civil servants working in evaluation roles across UK government departments, arm’s-length bodies (ALBs), and non-departmental public bodies.
To recruit participants, central evaluation teams within these organisations were each invited to nominate up to 10 individuals to take part in the Academy. Nominees were required to meet the following eligibility criteria:

- Have prior experience in delivering evaluations in government (c. 2 years' experience)
- Not have previously participated in stage 1 of the Evaluation Academy
- Be able and willing to deliver at least two Evaluation Academy sessions in their home department each year following the training
- Be available to attend all sessions of the July 2025 Academy

Nominations from departments were reviewed by a member of the Evaluation Task Force, and eligible individuals were enrolled in the study. In this review process, a small number of nominees were rejected because they were unable to attend all five days of training. This sample formed the basis for randomisation into treatment and control groups, as described in section 2.5 of the final report. The final sample comprises 118 civil servants from across a broad set of departments and public bodies. |
| Observation unit: |
Individual |
| Kind of data: |
Numeric, Text |
| Type of data: |
Experimental data, Other surveys |
| Resource language: |
English |
|
| Rights owners: |
| Name | Affiliation | ORCID (as URL) |
|---|---|---|
| | Cabinet Office | |
|
| Contact: |
|
| Notes on access: |
The Data Collection is available for download to users registered with the UK Data Service.
|
| Publisher: |
UK Data Service
|