January 18, 2022

Alison McKinley

What I learned about commissioning evaluations, once I stopped doing it

I have a lot of experience commissioning and managing learning and evaluation pieces, from international RCTs in digital health to collective learning facilitation in feminist movement building. When I decided to branch out and undertake some independent evaluation work myself, I was delighted to find some really interesting opportunities to respond to, only to discover I wasn’t quite as familiar with the process of applying for them as I’d thought.


I now wonder if perspective is a better teacher than experience.

A previous UKES webinar on reflective practice left me feeling pretty good about my use of reflection and learning in my career. But my switch from writing requests for evaluation proposals (RfPs) to responding to them made me realise something actually quite obvious: we can only ever reflect on what we’ve actually done. I could cogitate endlessly on fine-tuning my RfP-writing, but I’d missed half of the story before I started responding to them.

I was delighted to get my first independent piece of work. The scope was broad for the limited time available, but my proposal outlined an inception meeting that would clarify and refine the learning questions as a first step. This meeting came and went, as did subsequent opportunities to reduce the scope, which remained unfeasibly wide. My increasingly frequent and direct questions to narrow it were met with elusive responses which instead suggested interesting *new* avenues I might consider. I resorted to pointing out that this continued breadth could dilute findings significantly and limit their use. Still nothing, until the first feedback on the draft report stated that the findings were quite high-level... Only then did the commissioner really engage in the discussion, after a lot of time and resources had been spent.

I railed in frustration (in my own echo chamber): “How can you ask for something so light on detail and still expect a direct and insightful report to appear at the end?” But then I thought: how many have railed at me in the past? The truth is, evaluation and learning work are often commissioned precisely because ‘we don’t know what we don’t know’, especially in the complex environments in which we work.

Social impact is complex. Sometimes it’s just as hard for commissioners to articulate the ‘right’ question as it is for evaluators to answer it.

So maybe we should just agree to have a go at it together. Here are some things we might all do differently:

More dialogue, less pre-determination

Funders and commissioners: it’s ok to say that the first item on the agenda is to talk about what the questions should be. Evaluators: it’s ok to ask for clarity, invite conversation and make suggestions. I don’t mean more writing (the 5-page-limited response to the 20-page RfP is another topic). I mean actual dialogue, built into the commissioning process: an informal, pre-call conversation, or at least ensuring that the first step after contracting is an in-depth discussion of context, resources and learning goals. Formulaic commissioning guidelines and templates rarely include these and are part of the problem, unless we make them part of the solution.

More partnership, less power

If the final product is not as hoped, commissioners should be prepared to take some responsibility for this and reflect on whether and how the commissioning process could have enabled a better outcome. Continuing the dialogue theme, let’s add an after-action review, or some other intentional, open and reflective opportunity, preferably during the work, to think about what worked and what could be done better in both commissioning and the on-going implementation. Let’s talk informally but frequently. Let’s practise what we preach and learn to do it better.

More flexibility, less fluff

Evaluators cannot always see the internal processes that often constrain commissioning. I have been asked, for example, to create milestones for payments that bear little relation to the work and serve only to support a potential audit trail down the line. Or to frame an evaluation into something that ‘this donor’ will be interested in, even while what we really want to know spans the space between the siloes of funding. Until we overcome these wider constraints though, could we be a bit more imaginative about gaming this imperfect system? How about ‘meetings as milestones’, instead of interim reports-cum-doorstops? How about ‘data parties over decorative deliverables’? I’m stretching the alliteration now, but basically – let’s do more validation and course correction, and rely less on formulaic workflows and templates that simply don’t accommodate the complexity we work in.

Conclusion

All players in the evaluation marketplace – commissioners, evaluators and funders – share similar frustrations from different vantage points. There is a lot we could all do better to piece together our respective parts of the puzzle. We just need a little more perspective.