Fairness in the Field: How Practitioners of Randomized Controlled Field Experiments Manage Randomized Resource Allocation

Sunday, June 26, 2016: 10:45 AM-12:15 PM
258 Dwinelle (Dwinelle Hall)
Margarita Rayzberg, Northwestern University, Evanston, IL
Scholars in development economics and political science are increasingly using randomized controlled field experiments to evaluate the impact of policy interventions and test social science theory in low-income countries. Although social scientists have been conducting field experiments in the context of United States-funded technical aid abroad since at least the 1930s, the method has received renewed attention in the last two decades. Advocates of the method extol randomized controlled field experiments as the missing key to sound causal inference in social research and the best way to find out “what really works” to alleviate poverty in low-income countries. Some have placed the knowledge produced by experimental evaluation techniques at the top of the hierarchy of evidence, and the widespread adoption of this method across private philanthropic and governmental international aid institutions, as well as academic departments, has been described as a new international development paradigm.

Critics of field experiments challenge the claim that field experiments should be the standard for research in international development. One way in which critics challenge the rapid adoption of randomized controlled field experiments is on ethical grounds. They point to unintended but predictable adverse consequences of some experimental designs (Barrett and Carter 2010), issues around informed consent in low literacy settings and where review boards are less institutionalized (Baele 2013; Alderman, Das, and Rao 2013; Barrett and Carter 2010), and the ethics of allocation by randomization especially if mechanisms for targeted intervention are available (Alderman et al. 2013; Conning and Kevane 2002).

Barrett and Carter (2010) contend that randomization is a morally contentious practice since it seems to be a fair procedure that produces unfair outcomes: “by explicitly refusing to exploit private information held by study participants, randomized interventions routinely treat individuals known not to need the intervention instead of those known to be in need, thereby predictably wasting scarce resources” (Barrett & Carter 2010, p. 11). It is not sufficient to simply avoid provoking “‘bad feeling’ among the participants when conducting a randomized trial” (Banerjee & Duflo 2009b, p. 166), Baele argues; advocates of the experimental approach in development economics must take “the moral dimension of this methodological choice seriously” (Baele 2013).

These critiques have not slowed the implementation of such studies, but scholar-practitioners of field experiments have engaged with them to lay out their arguments for why they consider randomization more ethical than the alternatives. The philosophical debate about the ethics of randomized controlled field experiments as a whole unfolds in academic articles, working papers, and blog posts, and takes the form of abstract negotiation about the costs and benefits of randomization. Ethical considerations of a particular project, if they take place at all, are likely to occur among the researchers during the project design phase. And the final arbitration about the ethics of a particular project is relegated to an Institutional Review Board (IRB), such that its approval is sufficient for research implementation. In the practical work of implementing field experiments, however, questions about fairness displace considerations about the regulatory ethics of randomization.

Because it is difficult to manufacture placebos of social interventions, practitioners of field experiments need randomization to appear fair in order to enroll and retain implementing partners and research participants. In the field, then, fairness becomes something constructed - discursively, materially, and in relation to shifting historical and cultural contexts. I first present the rhetorical arguments offered by advocates to justify the fairness of randomized resource allocation. I then examine two practices characterizing practitioners’ attempts to construct fairness: research design strategies and public randomization ceremonies. I conclude with a discussion of two common challenges control groups present to researchers’ constructions of fairness - resistance and confrontation - and how practitioners aim to mitigate them.

Rhetorically, economists make rationalizing and naturalizing statements about the fairness of field experiments, but these arguments have little practical traction for legitimating randomization to implementing partners and research participants. In the field, practitioners of field experiments manage fairness during both research design and treatment allocation phases of the experiment. During research design, researchers minimize the detection of unfairness by geographically or temporally separating the control and treatment groups using a number of material technologies. If this is not possible, researchers mitigate accusations of unfairness during the treatment allocation phase via “public randomization ceremonies”, which both depend on and further reinforce the problematic, but historically legitimated, conflation of “transparency” with “fairness”.

These “fairness management” strategies are rarely entirely successful. Participants’ willingness to remain in the study depends both on the perception of treatment allocation as fair and on clear communication that there is no guarantee participants will receive something for enrolling in the study. Randomized controlled field experiments do use informed consent documents, but it is difficult to assess their effectiveness in communicating the study protocols in languages in which experimental research has not historically been conducted and in settings where experimental logic is not part of the cultural epistemic toolkit. Successful research participant enrollment depends, intentionally or not, on an entanglement between research and intervention, such that participants agree to be part of the study in part because they expect to receive something.

Control groups both resist randomized resource allocation and confront researchers about being excluded from treatment groups. When challenged, researchers mitigate accusations of unfairness by distancing themselves and their research activities from resource allocation activities, even as enrollment depends on the entanglement of the two. Achieving transparency, then, does not always produce the perception of fairness. In deconstructing discourses of transparency, Strathern (2000) asks a critical question: “what does visibility conceal?” This question is useful for investigating the transparency-as-fairness conflation and the research-intervention entanglement in the context of field experiments. I suggest that centering the debate about the ethics of randomization on the examination of fairness relative to treatment and control groups in any given study distracts both study participants and analysts from how benefit circulates in the larger economy of randomized controlled field experiments, that is, among the funding institutions, researchers, implementing partners, and research participants of field experiments.