July 26, 2019
Upping the Ex-Ante
Introducing a series of nine essays exploring the links between evaluation and frontier technologies
Back in 2013, I was deep into running regressions and A/B tests on a large(ish) dataset that I had to wrangle from a stats specialist. Used to pursuing mixed methods, I needed to get my hands on the numbers so that I could confidently triangulate the nights away and give my client the best possible evaluation results and recommendations. Despite doing so, I didn't get re-hired for the next iteration of the programme evaluation; simply put, the whole initiative was moving over to a big data platform. A paradigm shift, for sure.
Since then, I have focused on upping the ante in my own practice, extending my skillsets, understanding and approaches, and looking at how frontier technologies are wonderfully disrupting our assumptions and practices.
Impetus for the series
Many of our distinguished colleagues have weighed in and produced some amazing and insightful work that guides us in these endeavours. For my part, I have experimented, reviewed and documented as much as I can make time for. I have collected some of my thoughts in a series of essays that attempt to add to the conversation around big data and frontier tech, through a broad lens of evaluation as well as my specific interest areas.
‘Upping the ex ante’ is the result, or at least something of a waypoint along my journey that I wanted to share. I have tried to examine some of the fundamental notions around the philosophy, problems and assumptions, and the values and ethics of the fields; brought a brief case study to light; and tried to contextualise what I have found. I have also (briefly) illustrated how the repurposing of a specific methodology has brought new dimensions to our toolkits.
Sharing my findings
How data is collected, found, processed and used is of paramount relevance to evaluators. We need to be able to evaluate the technology down to the algorithmic level, but also to work from the other direction upwards, asking what considerations need to be made (and what skills we might need to gain). The practice of evaluation is being influenced by our tech and data colleagues, so it is about engaging with the ethics and the philosophy, and with what they mean in practice.
But there is enormous scope for taking a closer look through some of the lenses that we use within evaluation. Just as Emily Keddell’s analysis of a predictive risk model identified the risk of gendering child maltreatment (which I lay out in the series), in thinking about big data and frontier innovations we can equally apply both comprehensive gender and human rights lenses. The foundations of human rights are autonomy, liberty and dignity. From a human rights-based perspective, we must ensure not only that we avoid impinging on these foundations, but that what we do strives to enhance them.
I hope that the articles will provide some useful discussion and reference points for further exploration. You can find the articles at the following link (they are publicly accessible; no account needed).
Jo Kaybryn is an international development consultant, currently directing evaluation frameworks, evaluation quality assurance services and leading evaluations for UN agencies and INGOs.