UK Evaluation Society

Window Reflections – Part One

First blog covering delegates’ reflections from the UK Evaluation Society ‘Windows on Evaluation Matters’ special online event held between 23rd and 27th November 2020


Window #1: ‘Learning To Be Different: Why Evaluation Matters’ with Dr Peter Taylor (Institute of Development Studies) and Catherine Cameron (Agulhas Applied Knowledge)

Reflections from Andrew Berry (Research and Evaluation)

Windows on Evaluation Matters started with a bang on Monday 23rd – Learning to be Different – and I found the session to be highly valuable in several ways.

Firstly, a lot of the evaluation challenges that Dr Peter Taylor highlighted really resonated with my own experiences, such as getting learning and evidence to be used more consistently in decision making (I think a lot of us have been there!). I found other challenges more thought-provoking, particularly around power relations and how change is influenced by the ‘rules of the game’. I was also pleased by how Peter and Catherine’s experience in international development – a field in which I have no experience – provided me with fresh perspectives.

Secondly, the session was a well-evidenced and critical self-assessment of the state of evaluation in 2020, which was uncomfortable to hear at times, but deliberately so. The session has made me want to dig much deeper into learning around ‘whose knowledge counts’ and how ‘I’ play a role in evaluations, recognising that every individual’s personal baggage of opinions, prejudices and associated tribes is different. I see this as an actionable route to progressing my professional practice that has come directly out of the event.

Thirdly, I was inspired by the session’s focus on hope for an equitable and sustainable future – that it is ok to be optimistic without coming across as naïve – and both speakers reinforced how evaluation has a valuable role to play in achieving this, but we need to support each other as a professional community to be effective. Using the boiling pot analogy cited from the Africa Evidence Network, this session was a timely reminder of ‘why evaluation matters’: evaluators have a professional role to regularly check the pot and make sure various outcomes are continuing to bubble up, so that the hope can be turned into reality.

Window #2: ‘What do Delphi and Medici bring to Evaluation?’ with Dr Wendy Asbeek Brusse (Ministry of Foreign Affairs, Netherlands) and David Drabble (Tavistock Institute)

Reflections from Steve Powell (Director, Causal Map Ltd)

The Delphi method always seemed to me like a dimly remembered dream of evaluation, enticing and maybe too good to be true. Dr Wendy Asbeek Brusse gave us a really engaging account of what happened when she actually tried it. Her task was to produce a policy evaluation of anti-terrorism activities for the Dutch government. Real-life, high-stakes and controversial.

Why bother first of all systematically collecting evidence from key stakeholders, then collating it and submitting it to some evaluators’ sausage-factory analysis algorithm, then dreaming up recommendations on the basis of the findings, and having to check those back with the same experts? Why not just get the experts to tell you the answers and write the conclusions and recommendations for you? But then if everyone knew all the experts were going to agree, we wouldn’t need an evaluation in the first place, so presumably the Delphi trick is to find a way to get them all in a room and not let them out until the white smoke appears from the chimney. That’s my Wikipedia understanding of the Delphi method anyway.

Wendy frankly admitted that this was the part that was almost too hard. Her team succeeded in engaging the right experts and had no real problem inducing them to come up with long lists of themes and issues. They even agreed on a few common issues, which seemed promising, but many of these turned out to be the innocuous ones like “we need more M&E” and “everyone loves apple pie”. On the more contentious issues, the experts couldn’t be induced to find much common ground, and Wendy’s team was left with a long list of insightful but diverse and often incompatible claims. In the end, the Ministry had to make its own synthesis of this list, at least partly defeating the object. Maybe, she said, the problem was that sufficient consensus (even potential consensus) was lacking, because of the diversity of the expert group, or because of the theme itself, or both. Or perhaps there simply wasn’t enough money to pay the experts to stay in the same (virtual) room long enough for the white smoke to emerge.

I was really impressed by Wendy’s unvarnished and straightforward story, the kind you can remember and learn from. Having heard it, the Delphi method remains for me a tantalising dream.

Window #3: ‘Who Needs to Know What in Evaluation: A Provocation’ with Professor Nick Tilley OBE FAcSS (University College London), Dr Alison Girdwood (British Council) and Adeline Sibanda (ADESIM Developments and Co-Chair of EvalPartners)

Reflections from Tarran Macmillan (Head of Evaluation, Defra)

Professor Nick Tilley’s presentation on who needs to know what provoked me to reflect more broadly on the role that evaluators have in understanding the stakeholder dynamics around what we are evaluating. Thinking from the perspective of one of the key ‘moments’ of evaluation, commissioning, it resonated with me how much of my commissioning role involves understanding the stakeholder environment – both in developing a scope for an evaluation and in thinking about where we may want or need findings to ‘land’. Traditionally, that might be something considered at the outset, using a stakeholder map or sphere-of-influence exercise to build engagement and dissemination strategies for the relevant stakeholders.

But thinking about the ‘who needs to know what’ question, I wondered if there was an additional dimension to this: not just who needs to know what, but ‘who needs to know what when’. I describe this as dynamics because, as organisations like Defra roll out new policies where there may be unanswered questions, the stakeholder group identified at the outset may itself change in composition and in its importance for hearing and putting evaluation findings into practice. Borrowing from complexity thinking, if we are working in a complex, rapidly changing system, then the stakeholders within it, and the relationships between them, are unlikely to be static; they are subject to change during an evaluation, just as any of the outcomes we are looking to measure will be. Perhaps, just as we might review evaluation questions, or return to update a theory of change over the course of an evaluation, we need to do the same with our thinking about dissemination and about who we need to engage to ensure our evaluations have impact – and how this group might change depending on ‘when’. After all, if an evaluation is undertaken in an organisation where no one is around to hear it, did we really evaluate anything at all?

Continuing the learning

If you didn’t attend ‘Windows on Evaluation Matters’ but are interested in the topics that were covered, the UK Evaluation Society organises events throughout the year. You can keep up to date with these by viewing our events calendar – link here – and by following us on social media. Some of these events are for members only, so it is often more cost-effective to become a member of the Society – join us today – more info about member benefits here.