Evaluation fit for the future, UKES Round Table Exchange 1: understanding the context for evaluative practice

This blog builds on the Round Table Exchange (RTE) No 1: Understanding the context for evaluative practice, held on 27th October 2021. The panellists were Dr Rick Davies (Independent Evaluation Consultant); Dr Irene Guijt (Head of Evidence and Strategic Learning at Oxfam GB); Dr Martin Reynolds (Open University); and Ms Adeline Sibanda (Past President of the African Evaluation Association, member of the IOCE and EvalPartners), with moderation provided by Murray Saunders. The UK Evaluation Society RTE team emphasises that we bear full editorial responsibility for the contents of this blog, which draws very broadly on the discussion at the RTE.


The UK Evaluation Society provided a short briefing paper to the panellists to set the scene. This was pulled together from some contemporary observations of the evaluation context, particularly in the global setting, and from an overview of some of the critical commentaries arising from colleagues in various associations and societies across the world. In the ubiquitous fora for evaluation debate, there seems to be a discernible movement toward repositioning evaluative thinking and practice into a more self-conscious and proactive role in contributing to a transformative shift in priorities. If not a ‘wind of change’, we can identify this as a strengthening breeze![1]

There are plenty of reasons for this, as the Ofir and Rugg piece to which we refer above testifies. The change in the zeitgeist brought about by the pandemic, the shifting confidence in the hegemonic power of traditional forms of scientific certainty and credentialism, and the nexus of potent and overwhelming problems associated with climate change, global imbalances in the distribution of resources, power and food, and endemic poverty are beginning to come to a head (some would say this realisation is actually long overdue).

It would be perverse for the evaluation community in the UK to ignore this growing clamour, so we want to participate and contribute to this conversation. There are many dimensions to this debate, but we can contribute by providing a thoughtful space in which the UK and wider (global?) evaluation community can deliberate and act on these matters. The following imperatives are not intended to be exhaustive, but they show how these reasons for rethinking and repositioning evaluation are as important now as ever, if not more so. The danger, of course, is of overreaching what evaluative practice is able to achieve. However, we can contribute, and it is to this that UK Evaluation Society Round Tables aspire.

We argue that evaluative practice is going on in many environments, the occupants of which may not self-identify as evaluators, but there are both situated differences and commonalities in the effective consideration of the evaluative dimension. We are able to identify, for example, what we could call the evaluative imperative in the following ways. At personal, organisational and national levels, the urge to ‘sense make’ in complex environments is real; evaluative practice has the ‘promise’ of ‘telling us what is going on’. There is the social and political imperative of transparency in the distribution of resources in a legitimate and equitable manner; evaluative practice promises to provide ‘resources for the public debate on transparency and equity’. There is a lively and important methodological debate concerning the difficulties and uncertainties in addressing ‘end points’ (attribution, impact and effects, causality, alignment and design); evaluation aims to provide ‘authoritative and authentic evidence’ in a rapidly changing and uncertain environment. Prosaically but importantly, evaluation costs time and money, so we are ‘moving away from evaluation as ritual, or as mere compliance, toward evaluation outcomes which are useful, insightful, diagnostic and meaningful’.

When we consider the potential role and positioning of evaluative practice in the broader, overarching context we identify above, we are living in what social theory calls conditions of chronic uncertainty. Creating change can produce periods of normlessness and destructive instability. The kinds of shifts to which we refer are prompted by the confluence of several features (or at least the enormity of the challenge has become more mainstreamed), and the imperative for new ways of behaving can produce such instabilities as a transition is made across a boundary from one culture of practice to another. The UK Evaluation Society aspires to support the reconsideration of evaluative practice by constructing provisional stabilities as we seek creative solutions to problems produced by change. We believe that our Round Table series can contribute to the production of these provisional stabilities by reflecting on how evaluative practice might take place, enabling choices or decisions for future action. Thus we hope part of this agenda might be to adjust or inspire the development of bridging tools and thinking for planning and innovation within the evaluation sphere.

The panellists were in broad agreement on some of the characteristics of the contemporary context of evaluative practice worldwide. Taken together, they constitute a somewhat critical or even negative picture of the dominant modes of evaluative practice. Much of this practice is unaffected by the kinds of debates within the Community of Practice of evaluators who are writing, blogging and commenting on possible alternatives. There is thus a serious dislocation between this kind of transformative analysis and how most of us are experiencing evaluative practice, or even referring to evaluation. So, how can we summarise the critique of the contemporary scene?

  • The evaluand is often considered in isolation from wider systems, lacking a sensitive contextual or situational analysis which provides ‘connective tissue’ with more diverse or broader system dimensions. In other words, there is not enough social science analysis and ‘sense making’.
  • Almost without exception, factors included in evaluation designs privilege human beings’ interests in an unconscious way (akin to unconscious bias), irrespective of environmental factors which may impact other living ecosystems. The way this tendency may be viewed as inevitable precludes ‘other ways of seeing’, which are marginalised or even depicted as extreme. So, the effects of human activity on the environment (the Anthropocene) point increasingly to the importance of the interaction of humans with the natural/bio-physical environment, on an equitable basis. How might this be reflected in evaluative practice and the evaluation narrative, in much the same way as issues of diversity are beginning to be embedded in evaluative designs?
  • The dominant paradigm for the provision and procurement of evaluative services is broadly, but not exclusively, functional: characterised by recognisable proformas or rigid frameworks; dominated by accountability despite a rhetorical commitment to ‘learning’; dominated by providers and thinkers from the ‘global north’; under-emphasising local capacities, capabilities and knowledge in terms of control and power; and tending to be backward-looking.
  • Evaluative practice occurs in many different contexts, carried out by people who do not always self-identify as ‘evaluators’ or who may not be familiar with the rich contemporary and publicly available knowledge base of evaluative thinking and practice.
  • There is an emphasis on the project or programme. Policy evaluations of a more general kind are relatively less common and, as we note above, both foci are often undertaken in isolation from wider systems that might influence their effectiveness or focus.

Panellists were invited to share their views and to discuss whether changes need to be made in current evaluative practice to shift to an evaluative practice more fit for the future. This was, as they say, a big ask, but the conversation is at least moving forward. We are not necessarily promulgating all the dimensions of the critique we present above and, to some extent, the dimensions are nothing more than the kinds of disparities that characterise the world in general, played out in the evaluation domain. However, what might be some principles of procedure in evaluative practice worth considering as we navigate the future? Among the ideas circulating in the discussion at our first RTE, for those working in the evaluation field, were the following. (We are not privileging these ideas, but offer them as a contribution to the discussion.)

  • We should consider evolutionary theory and history as a useful countervailing perspective, adopt a more humble and realistic perspective on the changes we might initiate through evaluations, and provide a ‘standing take’ on the Anthropocene bias which might be embedded in evaluative designs (in much the same way as we should have a standing take on possible unconscious gender, ethnic or geopolitical biases). In practice terms, the ambition of an evaluative process needs to be realistic from the outset and situated with regard to a range of major biases. Can there be more explicit requirements, in general guidance provided for evaluative practice, about assessing unconscious biases, both at an individual level and in terms of the context in which the evaluand is situated?
  • The connection between democracy and evaluation is about the tolerance and management of difference and access to ‘social goods’. In this context, diversity might be more readily accessible, or at least just as worthy of consideration, as complexity. Many operational implications of being ‘complexity informed’ (adaptability, responsiveness, unintended effects and causal chains, etc.) might be enriched by a diversity imperative. In practice terms, evaluative processes would benefit from a more nuanced approach to considering and understanding whose voices are being heard, whose voices are dominant in the analysis and judgements, and whether the range embraces appropriate diversity. We should be mindful, though, that not all forms of diversity are sustainable: diversity can be a double-edged sword.
  • We should make sure we place as much explanatory power on ‘narrative’ and situated experience as on other types of methods. This is a plea to take seriously the full palette of methodological choices and to emphasise a much wider range of knowledge resources for evaluative practice, including indigenous knowledge resources and evaluation’s own history. In practice terms, for example, the evaluative process would benefit from a greater proportion of time being allocated upfront to ensure a deeper, strategic understanding of the context of the evaluand. Evaluative processes would also benefit from systems thinking being ‘de rigueur’, i.e. an essential part of evaluative practice, not an add-on. M and E guidance needs to reflect this by ensuring it is mainstreamed.
  • In the same vein, we should acknowledge the broad base of explanatory knowledge resources which might be harnessed in evaluative practice, alongside the clusters of practices, whether acknowledged or not, which are unique to evaluative considerations. These might include, to give one example, the way in which evaluative outputs, as well as the evaluative process itself, are negotiated and brokered by the participating stakeholders. Stakeholders might include not only evaluators and commissioners but other interested parties, for example those on whom interventions, programmes and policies have had a direct impact.

Comments/further discussion are most welcome. Please send to hello@evaluation.org.uk, marked Attn: RTE1.

[1] See, for example, the piece by Zenda Ofir and Deborah Rugg in the American Journal of Evaluation section on International Developments in Evaluation, ‘Transforming Evaluation for Times of Global Transformation’: https://journals.sagepub.com/doi/full/10.1177/1098214020979070