One of the most important insights to come out of the accountability and complexity research fields over the past decade is that, as important as local feedback is to getting stuff done (whether that means improving services, reducing inequalities or enhancing democracy), those on the receiving end of feedback also have to have the capacity to make improvements. Capacity can mean many things: power, skills, knowledge, resources, incentives and so on.
One of the problems is that, in the aid sector, most funding does not really allow for adaptation. At the excellent LSHTM Centre for Evaluation symposium on Timely Evaluation that I went to last week, a number of presenters and participants pointed out that even where evaluation procedures allowed for rapid, short feedback loops, the “project cycle” – with its predetermined 3-5 year plan for processes and outcomes – hampered any real prospect of change. At best, we heard, reactive adaptations are possible, but only to the extent that they don’t rock the boat; just so long as they fit in the logframe, in other words. And one of the few examples given of rapid feedback leading to immediate adaptation came from routine monitoring of a routine service (the application of statistical process control to routine healthcare services). That is, not a project.
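For readers unfamiliar with the technique, here is a minimal sketch of the kind of statistical process control rule that can drive that sort of rapid feedback: set control limits from a baseline period and flag any new observation that falls outside them. The indicator, data and function names are all illustrative, not taken from the work presented at the symposium.

```python
def control_limits(baseline):
    """Mean +/- 3-sigma control limits estimated from a baseline period."""
    n = len(baseline)
    mean = sum(baseline) / n
    var = sum((x - mean) ** 2 for x in baseline) / (n - 1)
    sd = var ** 0.5
    return mean - 3 * sd, mean + 3 * sd

def out_of_control(series, lower, upper):
    """Indices of observations falling outside the control limits."""
    return [i for i, x in enumerate(series) if x < lower or x > upper]

# Illustrative indicator: weekly clinic waiting times, in minutes.
baseline = [32, 35, 31, 33, 34, 30, 36, 33]
lower, upper = control_limits(baseline)

new_weeks = [34, 33, 52, 31]  # the third week spikes
flagged = out_of_control(new_weeks, lower, upper)
# flagged points are the trigger for immediate investigation and adaptation
```

The point is that the feedback loop runs continuously on routine data, rather than waiting for a project evaluation milestone.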
I noticed in Duncan Green’s latest “vlog” (from Myanmar, where he is visiting adaptive management projects) that he said there was lots of good stuff happening, but because it is so fluid and non-linear it is “hard to report on”. But often the main reason we even need to report on projects is because they are… projects.
Pointing out the limitations of the logframe, the project cycle and “planning” is of course not new; and in any case the cycle has proved pretty resistant to these critiques. But the discussions have got me thinking about a couple of things that might be worth looking into.
Firstly, I’ve not seen a lot of work on identifying the determinants of adaptation. In other words, do we know what it is that leads a service provider or a project manager to improve what they are doing? I made a few suggestions above, but it would be good to see some rigorous work on identifying these determinants and on how best to flip the switches. How can we make it easier for providers and managers to do the right thing?
Secondly (and relatedly), in the aid sector there has been a great deal of research on the impact of unconditional cash transfers to individuals and households, and on the impact of performance-based funding for service providers. But I’m not aware of much evidence on the use of unconditional cash transfers made directly to service providers. Could providing flexible or contingency resources directly to frontline service providers increase their capacity to gather and act on feedback and improve how they deliver services? Institutional or facility-level cash transfers?
The third thing that interests me right now – still in the development sector – is the idea of bringing real-time evaluation methods and skills into routine services. Most of the aid sector’s evaluation efforts are directed at evaluating aid-funded projects, and many are also tied into research publication requirements. So they suffer from the same “project cycle” problems. But in lower-income countries, many public services are actually provided without aid funding, or with minimal aid funding. They are not time-bound projects, although they are vulnerable to ebbs and flows in funding. I’d like to see more aid resources going to support evaluation and improvement capacity in those services, rather than focusing on measuring aid impact.