October 23, 2019
Introducing a reflective practice blog on big data and evaluation
In 2016, while on maternity leave, I started thinking about the term ‘big data’ – what it meant in general, and what it meant for an evaluation practitioner like me, one who is not a data scientist or statistician. I was also interested in trying out MOOCs (massive open online courses) as a route for professional development. You can find my first blog post here.
The blog helps me identify what I already bring to the table that is valuable in the context of ‘big data’ evaluation, and where I need to develop new knowledge and skills. It is also a way of documenting a path to basic knowledge and skills that might be useful for others in a similar position – i.e. interested but time-poor, inexperienced in data analytics, and not connected to big data or computer science networks (read more here about my own experience). What are the alternatives to a Masters, a PhD or some kind of intensive programming bootcamp?
My posts are by no means exhaustive; there are many important topics – ethics, for example – that I haven’t touched on yet. I really recommend Jo Kaybryn’s essay series (link here) for a more comprehensive exploration of the links between evaluation and frontier technologies.
What stands out?
Looking back at three years of learning and reflecting, two things stand out.
First, the rapid development of courses, books and commentary that specifically address the connection between big data and social science research and evaluation. Back in 2016, opportunities for learning were focused on analytics for business. Today, you can look to NCRM, MERL Tech and Sage Campus, for example, for more context-specific information (read more about these here). There are also efforts to support general data literacy for those who do not intend to become data science experts but want to be able to question the data that, more and more, informs decisions affecting our lives (read more here).
Second, the changing role of data in evaluation is part of a much wider trend. I first heard the term ‘methodological playground’ in an NCRM podcast by Professor Carey Jewitt in 2014 – read more here. It stuck with me as I noticed how creativity and the blurring of traditional boundaries in evaluation methods are growing – big data, big quals, the intersection with design, creative research methods, storytelling, and the role of community researchers.
For our practice as evaluators, this trend means we need to develop a high-level technical understanding of many different disciplines, alongside a less technical understanding of questions of meaning, ethics, knowing, and ownership, as we continue to reflect on our practice in relation to others.
Kerry McCarthy is an evaluation practitioner; she helps people and organisations make better decisions and understand their impact. You can find her blog here.