Banning and avoiding words


The Washington Post reported on 15th December that the Trump administration has prohibited the use of seven words in budget requests by the Centers for Disease Control and Prevention, a leading US government scientific agency.  The words in question are:

  • Vulnerable
  • Entitlement
  • Diversity
  • Transgender
  • Fetus
  • Evidence-based
  • Science-based

The article argues that this follows a pattern, with the Department of Health and Human Services dropping the term “sex education” earlier in the year; and the terms in the list certainly seem like the sorts of terms that would raise the hackles of a highly conservative administration.

Subsequent responses, including a non-denial from the head of the CDC, have insisted that there are no banned words, without really explaining the origins of the story.

My hunch is there may not be an actual ban: bans require laws.  Even President Trump’s attempts to ban transgender people from serving in the US military by proclamation have been struck down by the courts (and ignored by the military).  And I suspect that if Trump’s gang wanted to ban actual research and programming in these areas, they would do so directly rather than just banning words – as they did with the extended Global Gag Rule at the start of his presidency.  I also think if it was a ban on words, the list would be quite a bit longer.

Given that it came up in the context of budget requests, it may be that the list of words represents a form of self-censorship: in other words, these are the terms that are best avoided if budget requests are going to get past lawmakers and the White House.

If this is correct, it doesn’t make the situation any more reassuring.  Anyone with experience with budget request processes, or tendering, or grant applications, will be more than familiar with the importance of tailoring language to the priorities – or whims, or obsessions – of decision-makers; and with the constant flux around what’s flavour of the month.  This is always irritating but can be benign.  Not in this case.  What is particularly chilling in this instance is the contents of the list of words.  The terms confirm an aversion among decision-makers in the US to some of the most important principles and concepts in public health; and the inclusion of the terms “transgender” and “fetus” only confirms that those leaders are picking out one of the most stigmatised and excluded populations, and entrenching the aversion to reproductive rights.

Whether the words are banned or “to be avoided”, the story will reinforce a chill that extends far beyond the CDC. Groups that are not already excluded from US funding because of the gag rule will be thinking carefully about not just their language but their programming.  Organisations working on HIV will remember how, under the George W. Bush administration, as well as contending with the gag rule, they had to take a pledge that compromised their ability to work with sex workers, another highly stigmatised and excluded group and one that plays an essential role in an effective AIDS response.  The same era witnessed promotion of evidence-free but ideology-heavy HIV prevention strategies.  Organisations will be scrubbing their websites and communications materials to make sure there are no red flags.  They will wonder (as I did some years ago) who these people are with unclear job titles, who are emailing or visiting to ask unusual questions.  They will be passing on the risk-aversion tactics to their partners – who should be in the driving seat.  Administrators in the CDC and other US agencies will be caught between a rock and a hard place: continuing to do what they know is right, while doing so in a way that does not compromise them.

Effective programmes will probably be de-funded.

It’s no exaggeration to say that this will cost lives.  And the Trump Administration can do this without even needing to ban a thing.

Enabling adaptation


One of the most important insights to come out of the accountability and complexity research fields over the past decade or so is that, as important as local feedback is to getting stuff done (whether that stuff is improving services, reducing inequalities or enhancing democracy), those on the receiving end of feedback have to actually have the capacity to make improvements.  Capacity can mean lots of things: power, skills, knowledge, resources, incentives and so on.

One of the problems with this is that in the aid sector, most funding does not really allow for adaptation.  At the excellent LSHTM Centre for Evaluation symposium on Timely Evaluation that I went to last week, a number of presenters and participants pointed out that even where evaluation procedures allowed for rapid, short feedback loops, the “project cycle” – with its predetermined 3-5 year plan for processes and outcomes – hampered any real prospect of change.  At best, we heard that reactive adaptations are possible but only to the extent that they don’t rock the boat. Just so long as they fit in the logframe, in other words.  And one of the few examples given of where rapid feedback did lead to immediate adaptation came from routine monitoring of a routine service (the application of statistical process control to routine healthcare services). I.e. not a project.
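That statistical process control example is worth dwelling on, because the mechanics are simple enough to sketch. Below is a minimal illustration of an individuals control chart of the kind used in routine healthcare monitoring: it estimates control limits from the data and flags any month that falls outside them, which is the signal that would trigger a rapid response. The data and metric are invented for illustration, not drawn from the symposium.

```python
# A minimal sketch of statistical process control (a Shewhart-style
# "individuals" chart) applied to a routine service metric -- here an
# invented series of monthly clinic attendance counts.

def control_limits(values):
    """Return (mean, lower, upper) 3-sigma control limits, estimating
    sigma from the average moving range between consecutive points."""
    n = len(values)
    mean = sum(values) / n
    moving_ranges = [abs(values[i] - values[i - 1]) for i in range(1, n)]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    sigma = mr_bar / 1.128  # d2 constant for moving ranges of size 2
    return mean, mean - 3 * sigma, mean + 3 * sigma

def out_of_control(values):
    """Indices of points outside the control limits -- the routine,
    rapid feedback signal, no project cycle required."""
    _, lower, upper = control_limits(values)
    return [i for i, v in enumerate(values) if v < lower or v > upper]

# Illustrative data: a sudden drop in month 8 (index 7).
attendance = [210, 205, 198, 215, 207, 202, 211, 140, 208, 204]
print(out_of_control(attendance))  # [7]
```

The point of the technique is exactly the one made at the symposium: because the limits are recomputed from the routine data itself, the feedback loop runs continuously alongside the service rather than waiting for a mid-term or end-of-project evaluation.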

I noticed in Duncan Green’s latest “vlog” (from Myanmar where he is visiting adaptive management projects) that he said there was lots of good stuff happening but because it is so fluid and non-linear it is “hard to report on”.  But often the main reason we even need to report on projects is because they are… projects.

Pointing out the limitations of the logframe and the project cycle, and “planning”, is of course not new; and in any case the cycle has been pretty resistant to these critiques.  But the discussions have got me thinking about a couple of things that might be worth looking into.

Firstly, I’ve not seen a lot of work on identifying the determinants of adaptation.  In other words, do we know what it is that leads a service provider or a project manager to improve what they are doing? I made a few suggestions above but it would be good to see some rigorous work on identifying these determinants and on how best to flip the switches. How can we make it easier for providers and managers to do the right thing?

Secondly (and related), in the aid sector there has been a great deal of research on the impact of unconditional cash transfers to individuals and households, and on the impact of performance-based funding for service providers.  But I’m not aware of much research on the use of unconditional cash transfers made directly to service providers.  Could providing flexible or contingency resources directly to frontline service providers increase their capacity to gather and act on feedback and make improvements to how they deliver services?  Institutional or facility-level cash transfers?

The third thing that interests me right now – still in the development sector – is the idea of bringing real-time evaluation methods and skills into routine services.  Most of the aid sector’s evaluation efforts are directed at evaluating aid-funded projects, and many are also tied into research publication requirements.  So they suffer from the same “project cycle” problems.  But in lower-income countries, many public services are actually provided without aid funding – or with minimal aid funding.  They are not time-bound projects, although they are vulnerable to ebbs and flows in funding.  I’d like to see more aid resources going to support evaluation and improvement capacity in those services, rather than focusing on measuring aid impact.