The greatest trick the devil ever pulled was convincing the world he didn’t exist. (The Usual Suspects)
I’m not one of those people who puts aid donors on a par with the devil. Just as I think of welfare programmes and the NHS in the UK as mechanisms by which the government can use tax income to help to guarantee people’s rights, I see aid as a form of redistribution from the better off to the worse off. I couldn’t care less that economies, tax revenues and financial systems are bound by national borders, and I find the argument that governments in rich countries should only “look after their own” to be absurd and offensive.
However, like most people who have been involved in aid and development assistance for any amount of time, I’m well aware of the politics of aid, and I often wonder about what aid and assistance mean to the countries or people which are supposed to gain from them. This isn’t a post about donors’ need to have their logos prominently displayed on every document folder and product and computer and grain bag they’ve paid for – although that is probably still a conversation worth having. This post is more about the ownership of aid, and the ways in which the design of aid gets in people’s faces, possibly to the detriment of what it is trying to achieve.
A lot of attention has been paid to a new study of people who have been in some way involved in aid, as recipients or as agencies or intermediaries. “Time to Listen: Hearing People on the Receiving End of International Aid” is based on 6,000 interviews and investigates exactly this question. It has caused a fair amount of cringing and nodding in the aid community. The first question I asked myself when I heard about the study was this: how did the researchers define “aid recipients”? The second question: did the aid recipients they spoke to all know that they were aid recipients? The answer was: not necessarily. And an important finding of the book is that even if people know they’ve “received aid”, they don’t necessarily differentiate between sources of aid.
Quite a lot of my work involves reviewing and evaluating programmes – different aspects of them, from the services being provided to communities, to the mechanisms by which money gets channelled towards those services, to the overarching national plans that these services are supposed to be part of. Whatever level I am working at, these reviews will generally involve some interaction with some of the people who are getting these services. My clients (NGOs, donors, UN agencies) are understandably keen to know what difference their advice, ideas or money made. But of course, when you speak to people in the community, while they often care passionately and speak eloquently about the conditions people live in, the progressions and the regressions, and the services that are available, the mechanisms by which these changes did or did not happen are of little concern to them. They care somewhat less about whether it was this NGO or that NGO that made it happen, or whether it was aid money or government money that paid for it. And even if they happen to know who paid for it, this fact is far less important than what went right or what went wrong.
This, in my view, is exactly how it should be, even if it means a reviewer can’t honestly report which changes, or how much change, can be attributed to whom.
This poses a problem because most agencies implementing social welfare or development programmes are desperate to be able to attribute progress to themselves. I’m often asked to find out from “aid recipients” what they thought of this particular project. This is not just a case of neediness – whether the money is coming from a government department, a foreign donor, a philanthropist or a publicly funded NGO, they all want to be able to show their results, and most wish to learn from and adapt what they are doing. Well-designed experimental evaluations are an important tool for properly assessing the efficacy of interventions and policies, and there is a lively ongoing debate about how evidence is constructed and what the “results agenda” in development work means. But while this debate is important, it is also important to recognise that most evaluations of development efforts do not require this level of rigour. It is not necessary to re-evaluate every single programme using a randomised controlled trial, because once good experimental evidence is available for a given intervention or policy, you should try to roll it out to as many people as possible: withholding it from a control group is no longer necessary or ethical. At this stage, evaluation should be more about how an intervention works in the real world, whether available evidence on efficacy was properly applied, and what the effects of doing this were on the entire population rather than only in an intervention group. This sort of evaluation can be called “programme evaluation”, to differentiate it from experimental evaluation.
But however sophisticated they are, most programme evaluations can never truthfully attribute effects to a given programme, since most effects are likely to be the result of a combination of factors. Any changes that can be identified certainly can’t be attributed to whoever paid, because money is fungible. So doesn’t it make more sense for programme evaluations to be designed in ways that assess the contribution of a combination of inputs rather than the attribution of effects to each input, and that reflect the realities and concerns of the people at the receiving end? If so, shouldn’t the design of these evaluations focus on helping people to identify changes and to identify what could be done better, rather than on asking them if they remember which projects took place? And if this is the case, does it not also have implications for how programmes are designed and delivered in the first place? Make accountability grow; make aid vanish.