December 1, 2021

Tarran Macmillan

Reflections on Windows on Evaluation Matters 2021

Like many UKES members, I was pleased to see the second ‘Windows on Evaluation Matters’ event appear in diaries in November, giving the opportunity to step outside of the day job and reflect (pun intended) on evaluation in practice.

In particular, I had reflections on the ‘complexity sciences: what evaluators need to know’ session, led by Chris Mowles, accompanied by Helen Wilkinson of Risk Solutions and Adam Cowland of the Department for Business, Energy and Industrial Strategy (BEIS). It’s a field with which I am familiar, having held previous roles in environmental evaluation and grappled with the complexity of applying social policy to natural systems.

Despite being well versed in the value of complexity sciences, I found the tripartite presentation gave me three key takeaways, one from each of the speakers, which I will be drawing on in my own practice:

1. Inflection in the view of a complex system: as evaluators, we have a voice and do not come from a wholly objective viewpoint. This resonated with me as a government evaluator embedded in the organisation, undertaking evaluations of policies we ourselves enact. It raised questions about the blurry boundaries of internal dissemination and knowledge management, and our role in managing them.

2. Complexity isn’t a method but a mindset: this is also a key learning, avoiding the pitfall of focussing on methodology as an evaluator and instead applying complexity in the approach to evaluation. Sometimes effective evaluation is methodologically very simple, but undertaking it in complex spaces of multiple policies, stakeholders and end-users is where the complexity lies.

3. The practicalities of embedding complexity-aware evaluation in the government context: moving through rigid procurement systems and policy development cycles, while using the complexity mindset (from #2) to address these in practice. This can mean revisiting previous decisions on long-running evaluations, or giving consideration to how to build iterative practice into evaluation design.

Those are just three takeaways from one of the windows – what were yours?