Why Research Doesn’t Always Lead to Successful Policies

By: Afreen Shafeeque

From education reform to health economics, social scientists have been finding exciting results in their research. But often, these ideas fail badly when they are scaled and implemented. What do today’s social scientists need to consider when conducting research? What is the process for converting findings into effective policies or programs? What has been going wrong? These are imperative questions to ask, because the failure of our research and its subsequent policies reflects on the credibility of social science. Each time an allegedly research-based policy or program performs poorly or ends up costing far more than expected, politicians become more likely to reject future ideas that may actually benefit society.

 

Since the era of neoclassical economics and the rise of fields such as experimental economics, there has been a strong movement towards collecting credible data to formulate policies. Economists have focused on conducting lab experiments, natural experiments and pilot studies. Although generalizability is a well-understood concept, it is commonly neglected. To give their ideas the best shot, researchers tend to conduct their studies under best-case conditions. For example, in a study testing the effect of an education technique, researchers may enlist the best teachers. When the technique is implemented at scale, however, they may find the policy ineffective because the wider population also includes subpar teachers.

 

The issue with starting experiments under ideal circumstances is that we realise too late that an idea is not feasible, after money has already been invested in it. Sometimes researchers do this with good intentions; their program or policy may in fact be rational and effective. However, just because an idea is good doesn’t mean people will adhere to it or understand the rationale behind it as economists do. Implementation science is a field dedicated to this idea: understanding how to systematically promote the integration of research findings into practice. Whether the question is how to encourage people to take their medication on time or to participate in community programs, findings from this field may be part of the answer.

 

In his book “The Voltage Effect”, John A. List explains how ideas may succeed or fail to scale. He suggests policymakers often act too quickly on the findings of pilot studies. The book offers a helpful metaphor (chef versus ingredients) for evaluating whether community programs (such as universal basic income and education reform) should be scaled. Some programs are successful because of the person running them (the chef), whereas others succeed because of the program itself or its system (the ingredients). The latter tend to be more scalable because they are easier to replicate, using the same “ingredients”, whereas in the chef scenario it is impossible to replicate a human (although training individuals may help). List also suggests that people don’t know when to quit: some policies work best in certain communities and don’t need to be scaled to the larger society.

 

Another factor is the culture of academic research. A heavy focus on new, ground-breaking ideas means that replication studies, which provide credible data, don’t get enough credit and are underfunded. This incentivizes researchers to pump out new theories and ideas instead of checking existing ones before they reach the policy stage. Journals and organisations are also often not robust in vetting research. For example, it isn’t common practice for journals to require all the data used in academic articles so that their integrity can be checked.

A critical consideration of the realistic impact of a policy must be prioritised over its intentions alone. When policies and programs are being considered for larger-scale application, researchers must conduct further research on a larger, more representative demographic before implementation. A more rigorous process of peer review is also needed in academic publishing. For example, journals should demand all the data used and apply methods such as Benford’s law to detect data fraud. Unfortunately, even with the best intentions, seemingly good ideas fail. Society has pressing issues. Change is needed fast, but the process of science is slow and tedious. This leaves us with a difficult question: how do we balance the urgent needs of policy response while maintaining a reliable process of research and peer review?
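As an aside, the Benford’s law check mentioned above is simple enough to sketch. Benford’s law says that in many naturally occurring datasets, the leading digit d appears with probability log10(1 + 1/d), so fabricated numbers often stand out. The function names below are illustrative, not taken from any particular fraud-detection tool:

```python
import math
from collections import Counter

def benford_expected():
    """Expected leading-digit frequencies under Benford's law: P(d) = log10(1 + 1/d)."""
    return {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def first_digit_freqs(values):
    """Observed frequencies of leading digits 1-9 (zeros and signs are ignored)."""
    digits = []
    for v in values:
        s = str(abs(v)).lstrip("0.")  # drop sign, leading zeros, decimal point
        if s and s[0].isdigit() and s[0] != "0":
            digits.append(int(s[0]))
    n = len(digits)
    counts = Counter(digits)
    return {d: counts.get(d, 0) / n for d in range(1, 10)}

def benford_deviation(values):
    """Total absolute deviation from Benford's law; larger values flag suspect data."""
    expected = benford_expected()
    observed = first_digit_freqs(values)
    return sum(abs(observed[d] - expected[d]) for d in range(1, 10))
```

In practice a reviewer would apply a formal goodness-of-fit test (such as chi-square) rather than a raw deviation score, and Benford’s law only applies to datasets spanning several orders of magnitude; a high deviation is a prompt for scrutiny, not proof of fraud.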
