Reflections from Alison McKinley, UK Evaluation Society Member

WeBiBar Reflections

Prompted by discussion with fellow Members at the 21 January 2021 UK Evaluation Society WeBiBar: ‘Frustrations of life as an Evaluation Consultant – How I deal with them’


I thoroughly enjoyed my first UK Evaluation Society Members’ WeBiBar recently, despite the distractions of a young household at dinner time in the background. I’ve since been able to reflect more quietly on the discussion about some of the frustrations of working as an independent evaluator.

Engaging programme staff in conducting and using evaluations

In adherence to the Chatham House Rule, I will only say that the predominant theme to emerge from many participants was the difficulty of engaging programme staff in either conducting or using evaluations.

My own background in the international development sector has spanned both programmes and evaluation and the discussion really resonated from both perspectives. I wanted to try to articulate why this challenge is so common, and how evaluators might try to address it.

In my experience, programme staff are the technical subject experts. They ‘know’ best practice, be it through professional training or experience. While the programme cycle includes learning and course correction, more robust and complex evaluation approaches, and aggregated learning from multiple programmes, require additional time and methodological expertise in the form of evaluators.

Different operational models exist to combine programmes and evaluation, but the discussion suggested to me that the delineation between evidence generation and its use goes much wider than the sector I come from. While this separation aligns with the principle of independence in evaluation, there are two unfortunate sequelae:

  1. The due focus on accountability and cost-efficiency in (often) donor-funded evaluations creates an ecosystem where evaluation is equated with audit, jeopardising the other principles of trust, clarity and utility to programme teams and, ultimately, beneficiaries.
  2. The clamour for evidence-based practice often falls somewhere between the programme and evaluation professionals, the ‘producers and consumers’ of evidence, creating a false dichotomy and operational challenges for getting evidence into programming.

Evidence into practice and Evidence through practice

Examining the structural barriers to closer collaboration is a whole different subject, but the WeBiBar discussions recognised the result: evaluations become an externally stipulated add-on at a programme’s end, when the opportunity for formative learning and course-correction is all but lost. Since strong programme design and evaluation both originate in a solid change theory, more prospective collaboration would conceivably benefit everyone.

So how might evaluators begin to push back against these operational constraints to move from pushing evidence into practice, to generating evidence through practice?

Key steps for successful evidence use

The discussion threw up some concrete steps we could all take in daily practice, a mixture of advocacy and action, for which opportunities will vary between internal and independent evaluators. I highlight some here using the Alliance for Useful Evidence’s key steps for successful evidence use:

  1. Consult users on what evidence is needed: This is distinct from what commissioners need. If programme staff have not commissioned the evaluation, they may not feel their questions are being answered or that the resulting report will affect their work. In-house evaluators could seek this engagement during programme design, working collaboratively on the change theory to identify existing evidence, evaluation questions and data collection opportunities. While this is harder for independent evaluators brought in at the end of a programme, good ideas include acknowledging the distinction between commissioners’ and users’ needs, and engaging with programme staff at the earliest opportunity to understand their questions and ensure these are addressed in the evaluation.
  2. Collaborate with users to create new evidence: Programme staff hold a wealth of information essential to any evaluation. They can support access to key informants and collection of evaluation data. Yet the WeBiBar discussion highlighted frustration that it is often difficult to engage programme staff to maximise these opportunities. Addressing this step may be predicated on the first: if evaluation users (programme staff) feel their evidence needs are being met, they will be more likely to participate. Failing that, we may have to accept that programme staff cannot be expected to prioritise engagement with an evaluation that isn’t seen to impact or ease their workload. My only suggestion here is to acknowledge this reality and ensure we make it easy for them to contribute. (I like the Behavioural Insights Team’s EAST framework here: Easy – short and concise questions in a format of their choosing; Attractive – if not an incentive to participate, then maybe more informal interactions during a coffee break; Social – indicate how other colleagues have been involved and how their expertise was valued; Timely – avoid heavy programme periods, or time your approach around programme deadlines that the evaluation might help them with.)
  3. Develop targeted communications: Both commissioners and evaluators are responsible for clarifying the format of evaluation results at the commissioning stage, but there is frustration at the number of comprehensive reports that appear to remain unread. Ultimately, a wider and more interested audience can be engaged by varying the dissemination formats and platforms. To this end, one participant reported advocating for different dissemination formats in their proposed approach, while another simply provided more interactive elements such as videos, photos, podcasts and blogs. In-house evaluators might collaborate with communications colleagues on these products, but for independents, there is a danger that the (potentially significant) extra work may not be recompensed. We can all take baby steps though, depending on our technical skill-set: from simply advocating for more interactive content (do commissioners know that video is by far the most engaging medium?), to adding more images, or even creating interactive content ourselves.
  4. Increase capabilities to use evidence and 5. Support an evidence-based working culture: We couldn’t claim to know how much of the limited engagement with evaluations is due to limited capacity, but one practice shared at the WeBiBar that could support both capacity and working culture was to build some training on the purpose and approach of the evaluation into the proposed approach. Again, in-house evaluators might incorporate this more easily, without having to advocate for its benefit or justify the likely extra cost.

Benefits of taking these actions

These actions taken together may begin to redress some of the frustrations aired at the WeBiBar and support a closer, more aligned way of working between programme and evaluation teams. Notable benefits include more robust, prospective methodologies and real-time learning, as well as strengthening the principles of clarity and trust in evaluation.

Proceed with caution

But I would also add a short cautionary note: be mindful of maintaining the principle of independence, and avoid becoming, or appearing to become, invested in demonstrating programme success.

I don’t mean to open the ‘independence debate’ here (although I look forward to the next WeBiBar, which intends to do so!), but a small measure to mitigate this risk is ensuring commitment from all parties upfront to share learning from ‘negative’ findings as well as positive ones, thereby maintaining the evaluation’s integrity.

I acknowledge that closer collaboration between programme and evaluation teams is not always appropriate, depending on the evaluation’s purpose. Where it is appropriate, however, evaluators may bring the outline of the puzzle, and programme staff the image, making it much easier for both sides to piece it together. The suggestions here can begin to demonstrate this value-add and shift working culture, because systemic barriers won’t be overcome without pressure from both sides to change.

As one WeBiBar participant said, “anything evaluators do to support an evidence-based working culture with clients will be greatly appreciated by the evaluator after you.”

In conclusion

Ultimately, working together benefits the programme and the evaluation in real time: not in the future or just for the donor, but for this programme, for these beneficiaries, now. This is the shared motivation for all of us and fertile ground for cooperation.


The Evaluators’ WeBiBar is open to UK Evaluation Society Members once a month, offering a convivial setting for exchange on topical matters in evaluation.