Let’s stop this. No really, let’s stop it. I’m not the first to say this. Indeed, it’s not the first time I’ve said it. But I would be ecstatic if I could be the last. Logframes, logic, and lolly. Harrumph.
Logframes help to define the logic of development programmes. If you do x and assumptions hold, then y will happen, and if further assumptions hold then z will happen.
The logic of good development, you won’t be surprised to hear from me, is systemic change. That means the ‘output’, that is, the product of your activities in a development programme, should be a change in the way the system operates. This can be the introduction of, or improvement to, a supporting function or rule that positively affects the way a system works with respect to the poor. The consequence of that should be an improvement in the way the system performs which, in turn, results in poverty impacts: jobs, incomes, health improvements and so on. I’ll defend that logic to the end.
There are three main issues, however, which make this a merry-go-round of mirth and misery – and I’m not laughing.
Accountability: Donors might subscribe to this logic, but the New Public Management paradigm dictates that everyone needs to be accountable, even if accountability compromises the quality of delivery. Donors need a way to ensure that implementers are doing certain things. For this they use the logframe. It sets targets upfront, and payments are frequently dependent on either meeting those targets or hitting delivery milestones that involve performing predefined tasks.
Evaluability: Given the above logic, try to set an indicator which could be defined up front and used to ensure the programme is on track. Go on, try it. While this is an issue, it can, unlike the other, more inescapable factors, be addressed with a bit of creativity. It involves setting as outputs things like ‘the proportion of a sector that is directly impacted by a change in a supporting function or rule facilitated by the programme’. So, if a sector was prevented from growing because of low-quality inputs and a programme facilitated higher-quality inputs, what proportion of farmers in the sector are able to access the higher-quality inputs, and how does that proportion change over time?
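By way of illustration only (the figures below are invented, not taken from the article or any programme), such an indicator amounts to a simple ratio tracked over time: the share of the sector able to access the improved supporting function. A minimal sketch, in Python:

```python
# Purely illustrative sketch of the indicator described above: the proportion
# of a sector reached by a systemic change, tracked year by year.
# All figures are hypothetical.

# Hypothetical monitoring data: total farmers in the sector, and the number
# able to access the higher-quality inputs in each year of the programme.
sector_size = 200_000
farmers_with_access = {2021: 4_000, 2022: 18_000, 2023: 55_000}

def reach_proportion(reached: int, total: int) -> float:
    """Share of the sector directly benefitting from the improved supporting function."""
    return reached / total

for year, reached in sorted(farmers_with_access.items()):
    print(f"{year}: {reach_proportion(reached, sector_size):.1%} of the sector reached")
```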
The logic is that a sector is large, its size determined by the impact-level targets the programme is trying to achieve; the programme can’t partner with everyone, so it partners with one or a few firms to pilot a business model. Only as this model is mainstreamed in the sector does the proportion benefitting from the intervention change. The incentives there are right: it does not encourage undue focus on a particular type of firm, it does not result in target chasing at the expense of good development, and it does not define exactly what the nature of the change must be or how many such changes there must be to impact on the sector. Link this up to well-defined outcome and impact figures and we’ve got solid logic. So long as there is provision to revise the targets and indicators based on emerging lessons, it seems like a sensible strategy, save for one final factor.
Incentives: A word I should probably have tattooed on my head. They always matter. And here it is mainly the incentives of the implementer that we’re talking about. Implementers commonly have a great deal of input into the development and refinement of their own logframe. How perverse a situation is that? You don’t ask students to decide how well they should have to do to pass an exam! Understandably, contractors don’t want payment to be put at risk. Why should they? Given the system as it is, then, with contractors being made to put something at risk, their incentive is to make that something entirely within their own control. But that’s not the way development works. That’s the way being a postman works, or running a production line in a factory. It’s not suitable for trying to catalyse change in a complex adaptive system. Most contractors are not bad people – some are – and they want to deliver good development. But they are commercial organisations that cannot afford to put such a large proportion of their core business model at such risk.
Once a logframe, and a system that responds to it, has been set up in this way it can have a seriously negative impact on development despite the best intentions of all involved. Things I’ve heard recently include “I know it’s right but why would we do that, it doesn’t count against our logframe” and “you’ve clearly exceeded all targets here in terms of impact and how it was achieved but you’ve only worked with three partners and not five so we’re going to have to mark you down”.
It’s ludicrous. The whole thing is ludicrous. Why can’t we do better? There are a number of options in this regard. Success fees? Better transfer of reputation into the appraisal of future tenders as an incentive to succeed? Closer relationships with donors that allow for more subjective judgement in the equation? All have their pros and cons and none is perfect, but all are better than what we’ve got now. Please, for the sake of good development, let’s do something.
*This article was originally posted on the Springfield Centre website.