Developing Sophistry in a Dismal Science

Partha Chakraborty

 

Development Economics needs to go far beyond RCTs and mathematical models. It needs to learn from life, with all its beautiful imperfections and contradictions.

 

One weekend in the winter of 1995-96, I could be found lying on my back on a couch in the Graduate Students’ Lounge of the Department of Economics at a Northeast Ivy League. Suddenly aware of the caring but concerned gaze of a celebrated professor, I roused myself, dropping pages of The Wall Street Journal onto the floor. There lay a pile of The Economist, Fortune, a week’s worth of The Journal and other newspapers.

 

No, this is not the story of a Friday-night bacchanalia gone horribly wrong. I was merely using the lull of a weekend to satisfy my craving for the “real world”, and I fell asleep. The esteemed professor led me to his corner office, where he laid out his grand vision of how mathematical rigor would, in due course, save a ‘dismal’ science from itself. The real world, he said, is too messy and confounding. Our goal should be to use mathematical reasoning to peel away the mess, to deploy rigor to explain observations without caveat emptor, even if only in the context of (very) limited analysis. Economics is too fraught with grandiose statements, diluted commentary and contextualization that are merely stories, he offered. I remember him using the word “sophistry”; he warned that my fetishization of the “real” world would lead me to repeat the mistakes of centuries. Thoroughly chastened, I gave up my reading habits.

 

For about two months.

 

My affair with Economics was not an act of adultery against mathematics and statistics, the bedfellows I had spent five years studying at a premier Indian institute. It was always meant to be a group act. Statistics, as I understand it, is not a set of tools – it is a state of mind. You never accept the contra-claim, you only fail to reject it; designing an experiment may take more time than the analysis (possibly even more than data collection); correlation does not mean causation; rejecting a linear relationship does not mean there are no linkages… the list goes on. Statistics never tells you why – you need context-appropriate knowledge to give you an understanding of the real world; you use the tools of statistics to further that understanding.
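
To make those last two points concrete, here is a minimal sketch in Python – a hypothetical illustration with made-up data, not drawn from any study. A variable can be completely determined by another and still show essentially zero linear correlation, so rejecting a linear relationship says nothing about the absence of a linkage.

import random
import statistics

# Illustrative only: y is completely determined by x, yet the *linear*
# correlation between them is essentially zero.
random.seed(42)
x = [random.uniform(-1.0, 1.0) for _ in range(10_000)]
y = [xi ** 2 for xi in x]  # perfect, but nonlinear, dependence on x

r = statistics.correlation(x, y)  # Pearson's r, close to 0 here
print(f"Pearson correlation between x and y: {r:.3f}")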

 

As in Economics.

 

Professors Abhijit Banerjee, Esther Duflo and Michael Kremer won the 2019 Nobel Prize in Economics for using Randomized Controlled Trials (RCTs) to study issues in economic development. Developed in the 1930s by R. A. Fisher, RCTs have been widely used in many other fields, including medicine. Medical trials – carefully designed with double blinding and sometimes involving thousands of participants – are the final arbiter in assessing the efficacy of a drug or treatment, validating (or not) billions spent. The three Nobel Laureates have been the primary drivers of RCT studies in development economics; J-PAL, the research organization where Banerjee and Duflo hold forth, claims to have conducted hundreds of such trials, in Africa and South Asia among other places.
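
For readers outside the field, here is a bare-bones sketch of what such a trial reduces to – a purely illustrative example in Python, with invented numbers drawn from no actual study: units are randomly assigned to treatment or control, and the treatment effect is estimated as the difference in average outcomes between the two arms.

import random
import statistics

# Purely illustrative RCT-style comparison; every number here is invented.
random.seed(7)
villages = list(range(200))
random.shuffle(villages)
treated = set(villages[:100])    # half the villages get the intervention
control = set(villages[100:])    # the other half serve as the comparison group

def outcome(village: int) -> float:
    # Hypothetical outcome (say, a test score); treated villages get a small boost.
    return random.gauss(50.0, 10.0) + (3.0 if village in treated else 0.0)

treated_scores = [outcome(v) for v in treated]
control_scores = [outcome(v) for v in control]

effect = statistics.mean(treated_scores) - statistics.mean(control_scores)
print(f"Estimated treatment effect: {effect:+.2f} points")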

 

Professor Banerjee and I both started our academic lives at the Indian Statistical Institute, Kolkata, per media accounts. He transitioned to economics at Presidency College within weeks; my affair with economics waited five more years, until I entered graduate school. We both carry a belief in experiments; perhaps I stayed in Statistics long enough to take its lessons on restraint to heart. One needs to search long and hard for context before rendering policy recommendations – methodologies, no matter how cleverly implemented, go only so far.

 

RCTs, as applied to development economics – a la Banerjee, Duflo, et al. – aim to look for God in small things, so to speak, and proudly so. They validate, for example, the use of flip charts in education, or adding nurse practitioners to complement trained doctors in primary health, or whether mosquito nets reduce the chances of malaria, or whether surprise visits increase police efficiency. Sometimes they can verge on the facetious; the effect of reading glasses on learning the alphabet for visually disadvantaged children is a famous example.

 

RCTs are not necessarily randomized in the true sense; experiment design depends critically on available opportunities, as researchers freely admit. Estimates of efficacy, or even the acceptance or rejection of a hypothesis, are highly ‘localized’; so many confounding factors exist that an analysis can be rendered nearly unusable as a policy recommendation. Because blind studies are so costly to design and execute, hypotheses need to be simple, thereby missing the point. Flip charts and supplementary teachers might work best together, one might be more beneficial than the other, or logistics may trounce both. Access to credit might be effective because women are empowered, or it may cause that empowerment, or it may be better or worse than other mechanisms for empowerment. Unlike in medical studies, follow-up studies to analyze confounding issues are rare; we will never get to know. Moreover, RCT studies in this field can never be double blind, and are thereby inadmissible to purists of statistics – a problem biostatisticians have gone a long way toward resolving in their own domain. Three previous Nobel Laureates, including Dr. Angus Deaton, who won for development economics himself, signed an open letter decrying the tendency to use RCTs as the final arbiter in allocating development budgets.

 

Such caveats fall on deaf ears among those not in the field. Economics is generally perceived as a big-picture study of the real world, one that brings out dependencies and risk factors far better than any mathematical model, however complex, can. RCTs by design, and sometimes by choice, look at the world in (really) small chunks, in a way that limits their raison d’être. Policymakers have been repeatedly hobbled by real-world outcomes when they implemented the recommendations. Economist Jean Dreze recently argued that RCT researchers almost never have first-hand experience of their study subjects, that they impose abstract, numbers-driven projects, and that they at times amplify data idiosyncrasies into grand policy suggestions.

 

There was a time when Economic Science was proudly dismal. It started with basic presumptions of self-interest and worked forward. At a macro level we studied economies’ growth and other metrics. Development Economics brought insights from other fields to bear on the problems of less developed economies, and Econometrics built the quantitative tools. Statistics and economics were supposed to be kindred spirits – both search for the direction of causality and make explicit the confidence one can place in the analysis. That was the primary reason I changed fields to economics myself: I wanted the language and the themes to create coherent arguments for and against. Economics is about getting lost, then found, in the dichotomies of the real world – in a good way. Mathematical models and statistical data analyses must always, without fail, be accompanied by a laundry list of caveats, and policy recommendations should be tinged with humility. Such a line of argument may sound like speaking from both sides of the mouth, sophistry even, but it elicits an awareness apt for the gravity of the subject. I find the simple one-liners drawn from some RCT studies highly disingenuous as a result.

 

We need to swing the pendulum somewhat back towards contextualization. Economics as a discipline must be more comfortable with its inherent contradictions and the confusions that follow from them – economics is, after all, a social science. We cannot simply expect that level of ‘precision’ from a discipline whose primary subjects are humans, with all their imperfections. Our mathematical models are merely concise representations of what would otherwise take a thousand words or more, not new revelations. Our statistical methods find a band of acceptability beyond which we agree that existing models no longer work. Precision, or the lack thereof, in and of itself, cannot be allowed to cloud our understanding of big-picture issues. When it comes to development economics, we need to look beyond the trivial, and RCTs will not help us do that.

 

In “Poor Economics”, Banerjee and Duflo begin by recalling “a comic book on Mother Teresa” in which the city then called Calcutta was so crowded that each person had only 10 square feet to live in, and by noting that “Abhijit knew where the poor lived. They lived in little ramshackle houses behind his home in Calcutta”. Not to dwell much on it, but our family may not have been far from that level of despair when I was growing up. [Thankfully, it is all more than a generation away.] Among the lessons I learned the hard way: insist on full implementation of the laws on the books, for on paper they tend to be virtuous. And that one never stops learning, adapting, working hard, keeping one’s head and costs down, or looking for the next opportunity if one is to grow out of the life one was born into – and I did.

 

I did not find these lessons in an Economics text. I suspect such generalized statements will never be fully captured in a mathematical model, nor truly validated with an RCT, but they are bigger than all the lessons in development economics I learned from the legends. The only way Development Economics can remain relevant to the subjects it studies is to find a voice, a language of reason, that brings these out. That goes far beyond the mathematical tools available, and beyond all the limitations of statistical analyses. It learns from life.

 

Sophistry, you say? If that helps keep a beautiful but dismal field relevant and potent, so be it.

 

[Partha Chakraborty, Ph.D., CFA is an entrepreneur in Blockchain and Wealth Management in the US and India. Dr. Chakraborty spent two decades in all parts of the Investment Management value chain globally; he lives in Southern California with his family. All opinions are the Author’s alone and do not necessarily represent those of any organization he may be part of. The Author alone is responsible for any error or omission.]