This is the third in a series of guest posts by Nate Monroe.
I’ve been underwater for a few weeks, and that’s left me five weeks behind in my new blogging enterprise. So, in an effort to get back on track, I’m here with an omnibus blog post that covers a bunch of the highlights from those weeks. But you should definitely check out Will’s coverage of Week 2 (Reality, Perception & Human Knowledge), Week 3 (What is Science?), Week 4 (Why Theorize?), and Week 5 (Want Ye Some Building Blocks for Theorizing?), as he gives a much more comprehensive take on what we’re trying to do with each class session.
The focus of this week was ontology, but much of the class discussion focused on what it means to “know something” and, once we know what knowing is, how do we know we know? (I know, I know…) To start with, I argued that we should think of “knowing something” in probabilistic terms. That is, a person “knows” something—how some process works, whether some cause-and-effect relationship exists, whether some “fact” about the world holds—when it’s no longer worth their time and effort to gather new information to increase the probability that the “thing” (e.g., the process, cause/effect relationship, or fact) works the way they think it works.
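This probabilistic notion of “knowing” can be sketched as a simple stopping rule. The function, its name, and the numbers below are all my own hypothetical illustration, not anything from the class: the idea is just that you treat yourself as “knowing” once the expected gain in confidence from new information no longer justifies its cost.

```python
def keep_investigating(confidence: float, expected_gain: float, cost: float) -> bool:
    """Decide whether gathering more information is still worth it.

    confidence:     current probability we assign to our belief being right
    expected_gain:  how much new information is expected to raise that probability
    cost:           the time/effort cost of acquiring it, on the same (arbitrary) scale
    """
    # Once certainty is effectively reached, or the expected improvement
    # no longer justifies the effort, we treat ourselves as "knowing".
    if confidence >= 1.0:
        return False
    return expected_gain > cost

# A researcher at 0.95 confidence, where new data would add little:
print(keep_investigating(0.95, expected_gain=0.01, cost=0.05))  # False: they "know" it
print(keep_investigating(0.60, expected_gain=0.20, cost=0.05))  # True: keep digging
```

The point of the sketch is only that “knowing” is defined relative to the cost of further inquiry, not relative to reaching certainty.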
Once we had settled on this as a starting point, I argued that a key aspect of knowing, in a scientific sense, has to do with how we can collectively know something. There are two key points about the need for collective knowledge that I emphasized. First, we often have to act together to do things (whether it be building bridges or creating constitutions), and because of this it is very useful if we can agree about when we “know something” collectively. Second, by making knowledge creation and development a collective enterprise, we’ll be able to get around some limitations in biased perception at the individual level.
One consequence of this discussion, and perhaps my most heavily emphasized point, is that consumers of research should be very wary of people who say we should all “know” something simply because they know it through their expertise. As I explained to the class, this does not mean we cannot use “expertise” to build better theories, create better measures, conceive better designs, and so on. I simply warned them that they should be wary of academic snake-oil salesmen who will inevitably cite their own experience and expertise as the reason we should “know” that one theory or another, one implication or another, or one measure or another is in fact “true” or “false.”
As Will already outlined in his post, Week 3 tackled the question “what is science?” One of the things the students picked up from the readings and seemed unsettled by is that there is no consensus on what science is either across disciplines or within political science. To their chagrin, I explained to them that I didn’t have a magical answer to this either (and if I did, they should be as suspicious of it as anything the snake-oil salesmen above told them). Instead I gave them my own approximate definition along with some basic principles that I argue lead to better knowledge creation for a community of people working on the same basic set of problems. My definition was this: science is a process that allows us to, as efficiently as possible, create useful explanations that are able to serve as the foundation for even better explanations in the future. One of my students, recalling the lessons of the week prior, suggested that an amendment might be in order: a clause that reminds us of the need to “create a process that guards against the limitations of natural human biases and perception.” I accepted the friendly amendment and congratulated her on her sharp thinking.
The key thing I wanted to impart this week was a two-pronged motivation for thinking of science as a community endeavor. Prong one focuses on the point made in the friendly amendment: the basic ontological problem of individuals trying to “know things” on their own and being constantly fooled by their perceptual limits. Prong two is the more interesting one, I think: for me, the heart of science is the ability to constantly replace our explanations with better explanations.
In the rest of the class session, I tried to foreshadow the way that this would come into play as we work through the more tangible choices that come up throughout the research design process. Consider the need for explicit theory, for example. Articulating a carefully, rigorously, logically constructed theory isn’t particularly important for an individual doing “science” alone on a desert island, but it becomes crucial within a community of scholars all working to explain a particular phenomenon or related set of phenomena by building off of each other’s work. In that environment, explicitly stated theory becomes the anchor point from which better and better knowledge can emerge. Similarly, norms about sharing data and rewards for successful replications are the building blocks of successful science as I have defined it (for the students in the class). In short, thinking about science as a set of rules and norms designed to solve a collective action problem gives us a basis for choosing which version of “the scientific method” we adhere to (or cobble together).
One theme that began in week 3 but carried us through week 4 (where the purpose was to explicitly discuss the goal of theorizing) was why deductive validity is so prized (by at least some cross-section of social scientists) in the development of theory. I presented the students with this “goal of theory”: to offer useful causal explanations for stylized facts (or phenomena or puzzles or relationships). So what “qualifies” as a possible explanation of a stylized fact? In other words, how do we know that an explanation (i.e. a theory) actually “explains” the pattern we seek to explain? I argue that deductive validity (either by formal or intersubjective means) is the only way for an explanation to qualify; the only way it gets to “compete” to be the currently most useful theory.
For example, say we notice that constitutional systems with bicameral legislatures tend to be slower to change policy (radical, I know). At minimum, any explanation (read: theory) that purports to explain this pattern must be able to plausibly argue that this pattern is a deductively logical expectation of the theory. Say I proposed this explanation: “Dolphins cry to the sun and my foot hurts sometimes. Also, tacos.” That explanation doesn’t qualify, because I cannot derive from it the expectation that bicameral legislatures are slower to change policy. Yes, I realize that my explanation is the extreme end of silly. But consider that it shares a key property with many, many other “explanations” that sound relevant: none of them demonstrably imply the stylized fact they purport to explain. This argument may seem tautological, obvious, or both, but a quick spin through any social science journal will reveal plenty of violations of this basic premise, where the purported explanations for facts do not logically produce these facts as implications.
Next, I spent some time warning students about what theory is not. Diagrams are not theories (though many good theories have accompanying diagrams to help readers follow). Hypotheses are not theories (though many theories produce interesting and testable hypotheses). Literature reviews are not theories (though most theories have assumptions and concepts that are grounded in and motivated by previous literature). Experiments are not theories (rather experiments are analogies to the hypotheses that theories produce). Lists of variables are not theories (I have nothing to add to that). Of course I did my best to offer lots of caveats and nuance, but the reality is that in this class session I was just trying to lay the groundwork for good thought habits. I know they will need to be reminded of these things at least a few dozen more times before it occurs to them on their own; I’ll probably account for the first dozen reminders over the rest of this semester.
We finished that week’s discussion talking about the article “Fuck Nuance,” which I had never read until Will suggested it for the class. First of all, its abstract, which reads in full “Seriously, fuck it,” is by far the best abstract I’ve ever read. But it would be easy to read that and think that the article is devoid of useful content or is a spoof; that couldn’t be more wrong. Instead, its key lesson is perhaps the most sharply insightful critique of one of the most ubiquitous occurrences in political science: the demand for more plausibility, detail, breadth, depth… “nuance” in our theories. As I warned the students, it’ll happen during every workshop Q&A, in every set of written comments, on everything they will ever present.
My main point of emphasis in the discussion was to warn them not to be nuance-seekers. Theories are abstractions; we know for sure that they leave things out. Indeed, they would be useless if they didn’t. Only secondarily did I prepare them for the inevitable day when they’ll be asked to provide more nuance in some theory that they themselves had developed. Rest easy that I didn’t actually tell them to say “fuck nuance” in a Q&A, but I did give them the more cautious suggestion that they (1) show the nuance-seeker that they’re capable of thinking about what a theory that incorporates the requested nuance would look like and (2) gently remind the nuance-seeker that their theory, as is the case with all theories, is meant to be an abstraction from reality.
In the fifth week we finally started talking about how to “do” this profession. Here, we focused on “the building blocks of theory,” which broadly consisted of talking about concepts and assumptions. I admit that I had a surprisingly hard time preparing for this discussion, for two reasons.
First, I had never thought as carefully as I should have about concept development, so I was thinking through some of this process for the first time. Luckily, Will is really, really good at it. So, across a couple of conversations with him, I came up with a three-step prescription for helping the students think about concept development:
- Develop a denotative definition
- Define the dimensional space of the concept
- Decide whether each dimension of the concept contains ordinal or continuous values
On the first step, I argued for appealing to first principles, and staying away from connotative definitions. For the second and third steps, we talked through the basic mechanics of having conceptual values that are both collectively exhaustive and mutually exclusive, and the properties of different categorization types. Along the way, I repeatedly hammered one point: concepts are not measures (I figuratively slapped the hands of almost every student for saying “variable” or some other related term). Concepts are the starting point for developing measures, and the beginning of a theory of measurement, but they are not measures.
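The three steps above can be sketched as a tiny data structure. This is a hypothetical illustration of my own; the class names, fields, and the example “democracy” concept with its dimensions are assumptions, not anything from the course:

```python
from dataclasses import dataclass, field
from enum import Enum

class Scale(Enum):
    ORDINAL = "ordinal"        # ordered categories (e.g. low / medium / high)
    CONTINUOUS = "continuous"  # real-valued

@dataclass
class Dimension:
    name: str
    scale: Scale  # step 3: does this dimension take ordinal or continuous values?

@dataclass
class Concept:
    name: str
    denotative_definition: str  # step 1: a definition built from first principles
    dimensions: list[Dimension] = field(default_factory=list)  # step 2: the dimensional space

# A made-up example concept with two dimensions:
democracy = Concept(
    name="democracy",
    denotative_definition="a regime in which leaders are chosen via contested elections",
    dimensions=[
        Dimension("electoral competitiveness", Scale.CONTINUOUS),
        Dimension("suffrage breadth", Scale.ORDINAL),
    ],
)
print([d.name for d in democracy.dimensions])
```

Note that nothing here is a measure: the structure records what the concept is and what its dimensional space looks like, which is exactly the starting point from which a theory of measurement would then be built.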
Second, in the run-up to class, I had a very strange realization: I don’t know what an assumption is. Or, to put it more precisely, I don’t have a rule for what differentiates it from other parts of theory, like concepts and actors. And, in fact, I wasn’t (and still am not) sure whether there is any difference. Are all parts of theory assumptions, or do assumptions in fact have their own character? I asked some of my colleagues, including Will, and no obvious answer emerged. Perhaps one of you readers can weigh in in the comments.
I confessed this to the students right off, but so as not to send them spinning into theoretical space, I gave them a rough attempt at a definition. An assumption, I told them, is “any part of theory that identifies which parameters—including actors and concepts—matter, and how they relate to one another in the theory.” This explanation is far from satisfying; at best, I’m willing to say I think it’s probably partially correct.
Another assumption-related stumbling block of slightly less troubling proportions was my attempt to confront “unstated assumptions.” That is: in every theory—including formal theories—some assumptions are left unstated. For example, two game theorists constructing proofs for the same model might not produce identical sets of “assumptions,” because they might justify the solution concepts differently or because they might have different beliefs about what constitutes common knowledge among their readers. Similarly, to generate purely verbal—but intentionally deductively valid—hypothetical derivations, two authors might choose a different set of stated “simplifying assumptions.” Is there some rule or principle that informs this choice? I didn’t have one. So, again, I gave them the best advice I could come up with: since, ultimately, their job is to come up with theories that are useful to some set of people, and since intersubjective agreement is the heart of deductive validity, I told them they needed to always keep their audience in mind when making these choices. That is, I told them they need exactly as many stated assumptions as would be required to persuade their audience that their theoretical implications necessarily follow from those assumptions.
This doesn’t quite get me all the way caught up, as I’ve already taught weeks six and seven. But since Will hasn’t yet blogged about his experience with those lessons, I’ll leave those topics for my next omnibus post.