Theory’s Gonna Getchya: Incentives and Academic Fraud

[Image: GreenEatsCrow]

This post floats an argument I have been kicking about in the wake of Don Green’s retraction of the LaCour & Green article in Science.  Before jumping into that, however, I want to observe how impressed I am with how decisively and rapidly Green responded to the apparent fraud in his project.  While we expect leaders to take responsibility and act appropriately when scandal breaks, they rarely do.  And, given where he found himself over the weekend, Don Green just modeled exceptional behavior.  I have spoken with many people who feel that “he had no choice” and that “we would all act that way.”  I am not that sanguine.[1]  As I see it, we should all hope to conduct ourselves that well should we find ourselves in the crucible.[2]

That said, Green’s retraction request shines a light on an issue that the causal inference zealots do not, as far as I am aware, widely appreciate: they are at greater risk of the sort of fraud that appears to have occurred in this case than are those of us who rely on theory and observational data to draw our causal inferences.  Bear with me, and see whether you think I am onto something here.

Kicking Theory to the Curb

Let me begin with Green’s 1990s work, Pathologies of Rational Choice Theory and “Dirty Pool” <ungated PDF>.  I read “Dirty Pool” first, and while it contains a useful takeaway (fixed effects can serve as a benchmark when working with cross-sectional data), what jumped out at me was its apparent contempt for the role of theory in valid causal inference.  To be sure, that contempt was more implied than boldly stated, but it was, in my view, unmistakable.

I then read Pathologies, which is an embarrassing straw-man attack on rational choice theory, and updated my belief: this was a scholar who was decidedly hostile to theory, and especially its rational choice variants.

What those works lacked was a positive alternative: if we are going to reduce the role of theory in causal inference, then what do we use in its place?[3]  Green, among others, would provide that answer.  Indeed, a nascent causal inference identification revolution was soon afoot, and Green became one of the leading zealots.

While I had little use for Green’s early zealotry, as a student of rebellions I am well aware of the positive role radical positions can play, and I quickly came to appreciate (mostly due to colleagues at FSU) the value that greater attention to design delivers.  Today it is clear that Green, his fellow zealots, and their acolytes have brought remarkable benefits to scientific inquiry in our field.

That said, I sketch here how his disdain for rational choice theory absent “satisfactory” empirical evidence likely led him to underappreciate the elevated risk of data fraud that his projects run.

The turn toward greater attention to how designs might help us draw causal inferences has been fantastic for science, and Green deserves a great deal of credit for playing a role in that shift in political science.  That said, his antipathy to theory in general, and rational choice theory in particular, sets the stage for an irony that I cannot resist pointing out.  If you have not yet guessed, ’tis a story about the incentive to cheat, principal–agent theories of human behavior, and the risk of data fraud in academic research.

Consider a Distribution over That

Let’s embrace the world Green eschews and assume that the risk of data fraud varies across research projects.  What dimensions might we theoretically identify over which such risk might vary?  How about: the weight placed on the data?

Imagine a dimension describing the theory/data mix supporting an inference that ranges from 100% Theory, No Evidence at the left extreme to 100% Evidence, No Theory at the right extreme.  Now let the risk of data fraud be depicted on a vertical axis over some range that works for you.  What is your belief about the shape of the curve depicting the risk of data fraud as we move from left to right along the horizontal axis?

I confess, I don’t have a very precise belief about the curve beyond its basic shape: I am confident it rises monotonically.[4]  Why?

Scientific publication produces benefits, and the probability of being caught cheating is less than one.  Holding the risk of being caught constant, as the importance of data to a project rises, the benefit of fudging the data should also rise.
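To make that intuition concrete, here is a toy sketch of the expected-payoff comparison.  The numbers are invented purely for illustration (no data up my sleeves here either), and this is my own back-of-the-envelope model, not anything drawn from Green’s work:

```python
# Toy model: expected payoff to fudging as a function of the weight w a project
# places on data (w = 0 is pure theory, w = 1 is pure evidence).
# All parameter values are made up for illustration; only the monotone shape matters.

def expected_payoff_to_fudging(w, p_caught=0.1, benefit=1.0, penalty=5.0):
    """The benefit of a publishable finding scales with the weight on the data;
    the expected penalty for being caught does not."""
    return (1 - p_caught) * benefit * w - p_caught * penalty

for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"w = {w:.2f}: expected payoff to fudging = {expected_payoff_to_fudging(w):+.3f}")
```

Under any such assumptions the payoff to fudging rises monotonically in the weight placed on the data, which is all the shape of my curve requires.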

“Fine,” you might say, “but what of principals and agents?”  Indeed.  Those of us who have collected data can attest that it generally involves hiring people to undertake much of the work.  Enter the PA problem (just theory here; no data up my sleeves).  All data collection efforts are exposed to the risk of fraudulent recording, and that risk rises as the number of people involved rises.
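To see why the risk rises with the number of people involved, consider a toy principal–agent calculation (again, purely illustrative numbers of my own): if each of n hires independently fudges with some small probability q, the chance that at least one record in the project is fraudulent grows quickly with n.

```python
# Toy PA calculation: probability that at least one of n hires fudges,
# assuming each does so independently with probability q (illustrative values only).

def prob_any_fudging(n, q=0.02):
    return 1 - (1 - q) ** n

for n in (1, 5, 10, 25, 50):
    print(f"{n:>2} hires: P(at least one fudges) = {prob_any_fudging(n):.2f}")
```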

“Fine, fine,” you might say, “holding constant the number of people one hires, why would the PA problem be any worse for field experiment projects such as those headed by Don Green and the like?”

I have been involved in observational data collection efforts that one expects to be used for multiple research projects evaluating a variety of hypotheses implied by many theories.  Field experiment data tend to have a much narrower purpose: to estimate a specific (set of) causal effect(s).  As such, the project’s success depends not upon completing and depositing the data, but upon generating a finding.  The value of a large field experiment project has a much more binary flavor than that of a large observational data project.  The former often can, and typically will, be used only “once”; the latter is often intended to provide for many future projects and general inquiry.  Both contribute to our understanding of politics, but the temptation to fudge is stronger in projects where the data are tailored in form to estimate the size of a very particular, if not singular, finding.

To summarize, my argument suggests that the PA problem intersects with the incentive to fudge, and together they make the risk of data fraud considerably higher in the work that Don Green does than in the “theory and observational data” work that some zealots are so dismissive of.

What’s the Upshot?

At the end of the day we can trust that the social practice of science will work well, as Green’s retraction amply illustrates it can.

The import, then, is not for the community at large, but for each of us as we plan our multi-person data collection projects.  I encourage folks to consider the risk of fraud in their projects.  I am confident that social science will continue to become more team-based, making an issue Rick Wilson discussed increasingly important:

This case also raises the question of the role of LaCour’s co-author in monitoring the work… All of us who have co-authors trust what they have done. But at the same time, co-authors also serve as an important check on our work. I know that my co-authors constantly question what I have done and ask for additional tests to ensure that a finding is robust. I do the same when I see something produced by a co-author over which I had no direct involvement.

Regardless of whether my beliefs about the variance in the risk of fraud are reasonable, I trust that few believe the risk of fraud is constant across all projects.  But I hope this post helps us begin to think more explicitly about the risk of fraud, and about the construction of useful monitoring systems in our projects.[5]  There is, after all, a theoretical literature to which we can turn.

In closing, this post leaned on no data, much less an “identified” causal inference.  I hope we don’t need to wait for “gold-standard” field experiments before taking the issue seriously.

[Image: crowplate]

Too soon?

@WilHMoo

[1] This may be because I study dissent, repression, and human rights violations, and do not see those processes as driven by good versus evil human beings, but by banal human processes in which any randomly selected one of us is much more likely to participate, given appropriate circumstances, than we want to believe.

[2] I am not suggesting that Green is “an innocent victim” here, though that may well prove to be true.  From where I sit, Green and other causal inference zealots who downgrade the role of theory are prone to rely too strongly upon design and “getting the same result from two experiments” when they could rely upon theory to provide constraints on expected results, and especially on the size of effects.  But that’s an issue for another post.

[3] The importance of theory to valid causal inference is generally credited to Karl Popper, and a useful account can be found in Designing Social Inquiry, among many others.  Imre Lakatos provided an important generalization that emphasizes science as a practice of a community, rather than a solitary exercise pitting hypotheses against data, an issue that is poorly understood even among scientists and that I discuss here.

[4] More precisely, I believe it continues to rise monotonically across the whole range, not just that it rises as Evidence goes from a zero percentage to a non-zero percentage.

[5] For the Ill Treatment and Torture project, Courtenay Conrad and I decided upon a mix of recruitment screening, costly signalling before joining the project, and consistent emphasis on the facts that our research assistants’ wages were paid by the American taxpayer (via an NSF grant) and that the ability of future students to gain such research experience depended on their doing excellent work.  We relied strictly on rational choice theory to design this system.


2 Responses to Theory’s Gonna Getchya: Incentives and Academic Fraud

  1. rkwrice says:

    I have long been puzzled by some in the “field experiment” movement. I certainly understand the value of estimating precise effect sizes when trying various “interventions.” There are lots of funders out there, ranging from NGOs to political parties, that want to know what their dollars will buy on a per-unit basis. What I don’t understand is the atheoretical approach to interventions. When selecting an intervention from the finitely large set of possible interventions, the choice is guided by some model of the world. I wish the model were better articulated so that I don’t have to guess and reconstruct it myself. The beauty of experiments, whether in the lab or in the field, is that the experimenter tests the mechanisms that matter (at least to that scientist). The choice of which mechanisms to explore is a deliberate theoretical choice. Don Green is burdened with the same theoretical demands as the rest of us trying to practice good science.

  2. Irfan Nooruddin says:

    +1000 Thanks for saying this.
