Monday, March 12, 2018

The implementation validity problem with RCTs

Randomised Controlled Trials (RCTs) are extensively used in development today. Funding decisions by multilaterals and other donors are swayed by evidence from RCTs. In fact, in certain rarefied confines of international development, RCTs have become the definition of evidence itself.

A typical RCT is a small experimental pilot, run with the smallest sample size consistent with statistical requirements, under the supervision of principal investigators and with the field support of smart and committed research and field assistants. This poses a problem.

How do we separate these two effects?
  1. The effect of the treatment or innovation itself.
  2. The effect of the immense energies that reputed researchers and their committed, passionate young research assistants (RAs) pour into protecting the integrity and fidelity of the experiment, an effect that would be absent in business-as-usual implementation but which contributes to the experiment's effective implementation.
In other words, how do we evaluate the treatment (or the innovation being proposed) in a business-as-usual environment?

An RCT typically establishes the efficacy of the treatment. But it tells us little about its effectiveness, which is a function of both efficacy and implementation fidelity.
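To make the distinction concrete, here is a minimal simulation sketch in Python, assuming a purely hypothetical multiplicative model in which the effect actually delivered scales with implementation fidelity (the effect size, sample size, and fidelity values are all illustrative assumptions, not estimates from any study):

    import numpy as np

    rng = np.random.default_rng(42)

    TRUE_EFFICACY = 0.30   # hypothetical effect size under perfect implementation
    N = 2_000              # participants per arm

    def run_trial(fidelity):
        # Stylised assumption: the delivered effect scales with fidelity.
        control = rng.normal(0.0, 1.0, N)
        treated = rng.normal(TRUE_EFFICACY * fidelity, 1.0, N)
        return treated.mean() - control.mean()

    # Researcher-supervised pilot: PIs and RAs keep fidelity near 1.
    print(f"Supervised pilot (fidelity 0.95): {run_trial(0.95):.3f}")
    # Business-as-usual rollout through weak delivery systems.
    print(f"At-scale rollout (fidelity 0.40): {run_trial(0.40):.3f}")

Under these assumptions the supervised pilot recovers something close to the treatment's efficacy, while the business-as-usual rollout measures its effectiveness, a much smaller number, even though the treatment itself is unchanged.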

This assumes great significance since the same innovation or treatment is generally implemented at scale through government systems, which are notoriously enfeebled. In fact, given weak state capacity, rolling out innovations whose efficacy has been established through RCTs is akin to pumping exotically engineered liquids through pipes that leak everywhere.

It is also the reason why practitioners are lukewarm about many headline RCT findings that generate intense interest among academics and in the global development talkshops.

Should we call this the implementation validity problem?

It is surprising that, while papers and books have been written about the internal and external validity problems associated with RCTs, this arguably more important challenge, given the weak state capacity of most implementation environments, hardly gets a mention.

1 comment:

Anonymous said...

Thank you for highlighting this point. I remember a colleague, who was working on RCTs at the time, making the same point in a discussion about the external validity of RCTs about five years ago. The colleague called it the "RA effect" on fidelity of implementation, and noted that the selection of RAs is highly non-random by design. Perhaps replicability studies can be designed to address some of it?