Rubbish evaluations in the aid sector; consultants



I missed this, from Rajiv Shah the Administrator of USAID:

In many instances our project evaluations have been commissioned by the same organizations that implement them. Often what passes for evaluation follows a 2-2-2 model: two contractors spending two weeks abroad conducting two-dozen interviews. For about $30,000 they produce a report no one needs and no one reads.

And the results they claim often have little grounding in fact: one of our implementing partners claimed over a quarter-of-a-million people benefitted from $14,000 spent rehabilitating an Iraqi morgue.

This has led to a relationship between implementing partners and evaluators akin to that between investment banks and rating agencies. Just like investors couldn’t tell the difference between triple-A investments and junk, taxpayers can’t tell the difference between a development breakthrough and a subprime development project.

Well, OUCH.  I think he generalises somewhat.  Some of the jobs I’ve done which Shah would probably categorise as 2-2-2 stemmed, I think, from a genuine desire on the part of the client to get me to have a look as an outsider and to give my views on how things are running and how they can be improved. And this can lead to improvements.  It might be more accurate to call this sort of evaluation a “review”, an incremental learning opportunity if conducted properly. Don’t expect them to demonstrate or prove impact. They won’t, and if they claim they will, someone is lying. But when a programme is putting in place policies or services that are already backed up by good evidence of efficacy and efficiency, assessing whether it has done so effectively is important. Implementation is no walk in the park.

But have I ever ended up doing a 2-2-2 like the one Shah describes (bar the cost element)? Or even a 1-1-1? Yes. More than once.  I’ve even commissioned them.  And despite the very dim view most people in the aid world (and outside it) seem to take of independents, I care, as do most of the independents I know and work with.  We don’t want to do meaningless work.  If it looks like an assignment is bullshit, many of us will turn it down, but it’s not easy to tell up front.  A lot of the time we don’t really know whether it is going to go anywhere until we are in too deep.

Sometimes what looked like a very exciting, meaningful piece of work, turns out to be nothing more than one bureaucrat’s means of winning an argument with another one down the hallway.

But this is no call for sympathy. Most of us do pretty well and manage to balance out the nonsense with meaningful work.

Anyway, OUCH nonetheless.  Because there’s a lot of truth in what Shah says. There’s a lot of it about. And he is right to want to fix it.

Still, given that 2-2-2s are, in my view, not universally or inevitably poor and meaningless I often wonder about coming up with some sort of checklist or charter to help ensure they are worthwhile.  Here’s a few thoughts. If you read between the lines you might get a sense of some of the challenges I’ve faced.

  • Will the client commit to findings being publicly discussed/shared?
  • Will the client commit to providing all original datasets to the consultant for analysis (rather than summaries)?
  • Will the client accept the consultants’ duty to maintain confidentiality of all informants?
  • Does the client acknowledge that the consultant is responsible for establishing findings and recommendations, and accept that it may not edit or change these but that it may issue a management response?
  • Will the consultant have the right to publish, independently, the methods and findings? (This is standard for university-based consultants but not for independents.)
  • Was the evaluation designed at the start of the programme rather than at the end when the managers realised an evaluation report had to be produced?
  • What is the mechanism for interpreting and implementing the findings or recommendations? If it is about experiential learning and incremental improvements, will the implementing team have the scope to make any recommended changes? If the review or evaluation is at the end of a major project, who is the “audience” for the findings?
  • Do the terms of reference make any suggestion that the exercise should describe the programme’s “impact”?  If so, question it, and if that doesn’t get you anywhere, run a mile.
  • Has the client pre-empted the findings? HINT: if the terms of reference say something like “demonstrate the impact of…” or “describe best practices developed by…”, they may need to be taken down a peg or two before you start.

I’ll try to add to this list. If you have any thoughts, please fire away in the comments.


4 thoughts on “Rubbish evaluations in the aid sector; consultants”

  1. I like your list. Number 3 is interesting. I heard of an NGO recently that wanted consultants to provide transcripts of all the interviews. Apart from issues around workload (the NGO didn’t seem willing to pay for the extra time this would take), this raises the confidentiality issue. Even if names were removed, it wouldn’t take much to work out who said what. I think the NGO’s reasoning was about checking consultants actually did the work, maybe also about using the data for other things, or checking the analysis (I haven’t discussed it with them to know) – so maybe well intentioned. I’d be interested to know if you’ve come across other NGOs that require transcripts.

  2. Hi
    No-one has demanded them from me yet. I think there should be a mechanism for some sort of review, especially when primary data is being collected. I reckon ideally it should be done by a third party or ethical review board… but as you say this has cost implications. Ultimately, though, I think there must be agreement in advance of the research taking place: methods should be clearly outlined and agreed. Again, this probably means a bit of additional work, but it is important for quality and ethical reasons, I reckon.

  3. Pingback: The empowering potential of evaluation practices | WhyDev

  4. Pingback: The Empowering Potential of Evaluation Practices | Veni, Vidi, Mutati
