Blog from Dr Teresa Fitzpatrick – Commissionership Community of Practice 4

Appropriateness and proportionality in evaluations

The fourth session of the community of practice pilot series on ‘Commissionership’ focused on issues of appropriateness of design and proportionality in implementing methodologies within the contexts and parameters which govern evaluations.

Understanding the wider organisational context in which the evaluation sits
Building on previous sessions, such as ‘knowing your audience’, the commissioner perspective underlined that an appropriate and proportionate evaluation is one which feeds information and evidence into the wider aims of the commissioner’s organisational strategy, whether that is to influence decision-makers, improve practice and/or demonstrate impact, either internally or with external stakeholders. Similarly, from an evaluator’s perspective, a good understanding of how the evaluation can contribute to the commissioner’s aims and strategies helps evaluators to select appropriate approaches and methodologies in the evaluation design, and to deliver the right information and data. This implies that commissioners need to provide adequate information on the organisational context, and that evaluators need to question commissioners about it, so that both have a shared understanding.

Commissioner and evaluator will benefit from ‘frontloading’ time and effort to clarify evaluation objectives and priorities
It was advised that commissioner and evaluator should invest time and effort in discussing and clarifying the objectives and priorities of the evaluation, along with any known information or data gaps, to inform the final choice of approaches and methods. This not only creates an important shared understanding of the shape of the evaluation and the materials available for it, but also steers the design towards robustness and contributes to building a productive working relationship between commissioner and evaluator.

Building in flexibility as the evaluation develops to ensure methods remain appropriate to context
As needs become clearer to the commissioner and the evaluator, or where internal and/or external contextual changes occur in the course of an evaluation, it is vital that both parties remain open and adaptable to re-design or adjustment of methods. It is equally important to record and communicate that changes have been made, and why. Any changes should not compromise the robustness of the evaluation findings.

Time and budget parameters
Appropriateness and proportionality of evaluation design also need to be negotiated within time-frame and budget parameters. Whilst this may seem obvious, it can become less so in the midst of operationalising an evaluation.

  • Whilst commissioners may be constrained by pre-set budgets, they can seek to re-balance the budget lines within the overall award, as needs dictate
  • Methods selected must be tailored to fit the time boundary of the evaluation. This implies a good understanding of the length of preparation and delivery time for method(s) selected
  • Resource decisions around data collection need to reflect the complexity or simplicity of the collection system chosen, including analysis and stakeholder involvement. Key considerations for commissioners include assessing whether pre-collected data is appropriate for the information needs of the given evaluation, and whether it is complete and has an appropriate level of validity; if not, its use will not be cost-effective. Commissioners and evaluators also need to be aware of the relative costs of different types of data collection. It is often assumed that participatory data collection is cheap, but it can be costly, as it takes more time and people than other data collection methods.
  • Evaluators appreciate a clear overall budget in evaluation tenders, so they can plan what they can and cannot do. This is not always forthcoming from commissioners.

Method choice
A mix of qualitative and quantitative methods is very often selected and provides a good way of building a robust evidence base. However, commissioners need to be aware of the resource intensity of many of these methods and the need for well-trained evaluators to execute them. Experimental methods are often deemed to provide a sound evidence base, but they are not always appropriate and do not always fit within time and budget parameters; they tend to be costly. The notion that quantitative means robust is too simplistic, and both commissioners and evaluators need to be wary of this assumption. An example was shared in the discussion where a randomised controlled trial had been abandoned for this reason. The implication is that commissioner and evaluator need to consider methods in the round, i.e. what data/information they will deliver and how this fits with the time-frame and budget.

It is also not always the case that the commissioner knows which methods are best to select. As noted in the earlier point above, it is important for the commissioner and evaluator to discuss and assess what is needed from the outset, even if this means changing elements of the tender/bid.

Risk of overpromising
Evaluators need to build a good understanding of what information/data is required for the evaluation, what already exists, and of what quality; otherwise there is a risk of overpromising. Overpromising raises expectations on all sides and affects the credibility of the overall evaluation, and potentially of the evaluators.

Some suggestions:

  • Evaluators could remind commissioners, as necessary, that cutting costs in one area of the evaluation (e.g. not appropriately resourcing a method) could negatively impact the evaluation and therefore render the exercise poor value for money.
  • Commissioners could invite evaluators to critique the methodology options proposed in a tender or bid, and to demonstrate the appropriateness of the design to the intervention objectives, bringing all stakeholders to the same understanding.
  • Commissioners are well advised to include plenty of organisational context and detail on strategies and data availability in the terms of reference, without being prescriptive about methodological preferences.
  • Commissioners should remain closely involved in the progress of an evaluation throughout, and particularly where a re-design occurs.
  • As mentioned in previous sessions, it was felt that a shift in the commissioner-evaluator relationship to one of a collaborative partnership would be welcomed.

The session participants included commissioners, evaluators, and some who had experience of both roles. The shared experience contributed to a sense of community of practice, and the understanding of ‘commissionership’ is growing with each session.

The next session focuses on ‘Use of evaluation findings and recommendations – Towards better practice’, taking place May 18th, 1300–1400 BST (online), and offers another opportunity to share experiences and learn from one another.

Dr Teresa Fitzpatrick, FHEA/Research Associate/Impact and Evaluation/Centre for Enterprise/Faculty of Business and Law/Manchester Metropolitan University tel:0161-2476455
Putting knowledge to work at