Quality of Evidence Rubrics for Single Cases

At the heart of evaluation is the need for causal inference, which requires making a claim about a change. For any such claim, it is important to consider the quality of the evidence underpinning it.

The Society is grateful to Tom Aston and Marina Apgar for sharing this material. Please acknowledge them when using or quoting it.


Using theory-based and case-based approaches to evaluation, Tom Aston and Marina Apgar explore how an intervention has contributed to a change, directly or indirectly, and often in relation to other causal factors.

In Thomas Schwandt’s (2007) Dictionary of Qualitative Inquiry, for example, evidence is ‘information that has a bearing on determining the validity of a claim.’ The focus, therefore, is on the “probative value” of evidence – how much the evidence strengthens or weakens a particular explanation (Ribeiro, 2019).


In this document, Aston and Apgar provide guidance and a set of rubrics for assessing the quality of evidence in single cases relating to a particular outcome.


Rubrics are a form of qualitative scale that include:


  • Criteria: the aspects of quality or performance that are of interest, e.g., timeliness.
  • Standards: the level of performance or quality for each criterion, e.g., poor/adequate/good.
  • Descriptors: descriptions or examples of what each standard looks like for each criterion of the rubric (see Green, 2019; Aston, 2020a; King, 2023).


Aston and Apgar bring together numerous evidence assessment methods and tools for causal inference (Pawson, 2007; Puttick and Ludlow, 2013; DFID, 2014; Vaca, 2016; Steadman-Bryce, 2017; Bond, 2018; CASP, 2018; SURE, 2018; Ramalingam et al., 2019; JBI, 2020; Gough, 2021). Because this guidance is designed for single-case explanations, it focuses mainly on how to strengthen internal validity within a particular case (i.e., the extent to which a piece of evidence supports a claim about cause and effect), and on cases that rely primarily on qualitative data. These evidence rubrics should, therefore, be appropriate for supporting various theory-based or case-based methods (such as Contribution Analysis or Realist Evaluation).


While this document does not offer full guidance for assessing external validity, it does include one rubric on transferability, which is more appropriate when using methods that account for context within causal claims. It also does not offer guidance on assessing the quality of an evidence base as a whole, whether at portfolio level or across a body of research.


Tom Aston and Marina Apgar (November 2023)