The first four weeks of my Scientific Inquiry–Theory & Inference course cover being successful in graduate school and human knowledge (ontology/epistemology, and what is science?), and week one of the theory section explores the purpose of theory. Week five provides two building blocks for theory: Concepts / Conceptualization, and Assumptions & Logical Implications. We established during week four that the course limits its attention to theory developed to explain why stylized facts occur, which is to say, to provide accounts of the processes that produce stylized facts.
Framing Discussion; A Historical Digression on Education in the US; and Audience, Audience, Audience
To kick off seminar I told them that I wanted them to have a 45-minute discussion among themselves (I just listen and take notes), focusing some attention on the ontological / epistemological assumptions they believed the various authors were (implicitly) adopting. Then I offered what I thought would be a quick digression.
I assign a fair amount of reading from Cohen & Nagel (1934) in the course, and if you have never read it, their presentation is pretty interesting. And it occurred to me prior to class that I should not assume that many of the students have much context in which to read the work, which is definitely a product of its time because: Audience, Audience, Audience.
So I explained to them that the book was an undergraduate textbook and that they should bear in mind that Cohen & Nagel could assume that their readers’ high school education drew strongly on the Great Books tradition of the Western canon, especially as taught in New England’s prep schools. Well, you might think that I would have reflected on the probability that “the Great Books tradition of the Western canon, especially as taught in New England’s prep schools” is pretty foreign to them, and that a non-trivial portion of them may not even know the movie Dead Poets Society (which is, of course, a caricature as well as a morality tale). But I didn’t.
And a student interjected that she found the authors’ assumption that the reader was familiar with all the stories from antiquity very off-putting, and found the book irritating. And I thought: “Teaching Moment.” I noted that they were in the PhD program to transition from knowledge consumers to producers and teachers, and put them in the shoes of Cohen & Nagel by pointing out that they would soon have their own classrooms: just as the first principle of real estate is Location, Location, Location, the first principle of public presentation of one’s ideas is Audience, Audience, Audience. And while “the Great Books tradition of the Western canon” was exclusionary, ethnocentric, etc., it made writing textbooks much easier, and they could really see that in action in Cohen & Nagel. I encouraged them to reflect, for a moment, upon what they could assume their students would know (pointing out that six years ago the youngest among them was 12, and probably not terribly engaged in public life beyond popular culture).
Well, hands shot up, and I realized I had shot myself in the foot vis-à-vis discussion of the assigned reading. D’oh! But a teaching moment is a teaching moment, there are lots of goals to pursue in this course, and there is nowhere near enough time to pursue them all (much less do so well). So I rolled with the moment.
Conceptualization
When I took back the reins of discussion after a brief break I began by observing that this week we stood zero chance of covering all of the issues addressed in the reading, much less exploring those that arose during the course of our discussion. That is a chronic feature of all seminars, but felt especially true that night, given the teaching moment digression and its impact on discussion.
I have them read the first 10 pages of Blalock’s 1969 undergraduate text Theory Construction, and Shively’s undergrad text discussion of the “Importance of Dimensional Thinking.” That sets them up to consume Barton’s 1955 chapter “The Concept of Property-Space in Social Research.”
Cohen & Nagel (1934) discuss “Terms: Their Intension and Extension,” “The Significance of Classification,” and “Rules for Definitions” (pp. 30-33, 223-33, 238-44), and Bailey discusses classification in his Sage monograph, Typologies and Taxonomies, (pp. 1-6, 11-16).
I have unorthodox views on the state of conceptualization in political science. I think we stink. No, that’s too kind. I believe our work in this area is so poor that students are better off not engaging it. I do not have the energy to defend that claim here, nor do I during seminar. So I let them know my view and explain that were they to take the course in most any other PhD program they would either (a) not discuss conceptualization as a distinct building block for theorizing (my guess is this is the modal outcome) or (b) read work that I find detrimental to the field.
To offer some guideposts I identified Aristotle and Weber as arguably the most influential protagonists in our tragedy, and then noted the central role that Weber’s definition of the state plays in my approach to thinking about politics (so they don’t file away “Weber. Moore hates his stuff.” or something similar, as grad students are wont to do). I illustrated briefly with a reference to Ideal Type definitions, which rely on checklists and suffer from both an absence of dimensionality (aka property space) and the problem of negative definition (everything that is not a member of the ideal type is lumped together as an undifferentiated group, unworthy of positive denotation). Next week I will use Dahl’s two-dimensional definition of Polyarchy as a contrast to Ideal Type definitions of Democracy.
Virtually all of the readings identify dimensionality as a central goal of denotative definition. And Cohen & Nagel discuss the weakness of negative definition.
I then reminded them that we have argued that the course limits its attention to theory as explanations of stylized facts (remember, we still haven’t defined causation–it’s coming soon!). So my claim is that while Ideal Type definitions have been, and will continue to be, useful for the production of human knowledge, we should explicitly embrace the norms advocated by Cohen & Nagel and Blalock, and treat concepts that lack dimensionality and/or contain negative definitions[1] as inadequate for the limited purpose of producing theories to explain stylized facts.
I then reminded them that these are unorthodox claims, and were they being trained elsewhere, they would be unlikely to be asked to entertain such views. 🙂
Regrettably, I failed to share with them my claim that any Ideal Type definition can be converted to a useful social science concept (i.e., one with dimensionality) by following these rules (poor time management!):
- Provide a denotative definition of your concept.
- Name it after the dimension (space) over which cases can be assigned, being attentive to both mutual exclusivity and collective exhaustivity.
- Identify whether (each) dimension is defined over ordinal or continuous space, specify the minimum and maximum values, and state whether the space between values is ordinal, integer, interval, etc.
If we adopt this checklist as a best practice for concept formation we can understand Ideal Type concepts as conflating dimensions with a dimension’s maximum value. If we reconceptualize an Ideal Type concept as a maximum value we force ourselves to name the dimension over which the value is a maximum. Bam! The negative definition problem disappears as well: we are also forced to provide denotation with respect to the full dimension (property space) and each of its values. Two birds felled with one tripartite best practice. Though none of the authors assigned for this section offer the checklist or discuss this issue, it is consistent with all of them.
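To make the checklist concrete, here is a minimal sketch (my own illustration, not drawn from any of the assigned readings; the dimension name, denotation, and values are hypothetical stand-ins for a Weberian reconstruction):

```python
from dataclasses import dataclass

@dataclass
class Concept:
    """A concept per the checklist: a named dimension (property space)
    with an explicit denotation, a declared scale, and stated bounds."""
    name: str        # named after the dimension, not its maximum value
    denotation: str  # denotative definition of the full dimension
    scale: str       # "ordinal", "integer", "interval", etc.
    minimum: float
    maximum: float

    def admits(self, value: float) -> bool:
        # Every case receives a location on the dimension, so there is
        # no undifferentiated "negative" residual category.
        return self.minimum <= value <= self.maximum

# Hypothetical reconstruction of a Weberian Ideal Type: rather than the
# checklist-style binary "state / not-state," name the dimension over
# which the Ideal Type sits as the maximum value.
monopoly = Concept(
    name="monopolization of legitimate violence",
    denotation="share of legitimate coercive capacity held by the center",
    scale="interval",
    minimum=0.0,
    maximum=1.0,  # the Ideal Type is this dimension's maximum
)

assert monopoly.admits(0.4)  # partial cases are positively denoted
```

The `admits` check is the payoff: once the full dimension is denoted, cases that fall short of the Ideal Type get a positive location on the dimension rather than membership in a residual non-category.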
Though I failed to offer them this Conceptual Best Practice Checklist, I did remember to explain that dimensionality is valuable because it ensures that we have concepts that can vary. And in so doing I part company with Clarke & Primo’s discussion of Models as Maps or, better, place restrictions beyond “useful for its intended purpose” on the models I want to include in the scientific knowledge community.[2]
If you are thinking “Wait a second. You are privileging probabilistic theory over deterministic theory (e.g., necessary / sufficient types of explanations),” go to the head of the class! Yes, I am doing that. But, I remind you that the discussion of causation is coming up. The Gordian Knot problem strikes again![3]
I also pointed out to the kids who participated in the Math Camp that we discussed the issues in points 1 and 2 during our study of probability, and reminded them of the distinction between classical probability theory (conceptual) and empirical probability theory. I then argued that our discipline suffers from a failure to distinguish the two and mistakenly considers these issues more or less exclusively from an operational perspective.[4] Invoking Jesse Pinkman, “That shit’s conceptual too, yo.”[5]
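For anyone who skipped the Math Camp, a minimal sketch of that conceptual/empirical distinction (my illustration; the die is a generic stand-in, not an example from the course):

```python
import random

# Classical (conceptual) probability: derived from the structure of the
# sample space before any data are collected. For a fair six-sided die:
classical_p = 1 / 6

# Empirical probability: the observed relative frequency in data.
rolls = [random.randint(1, 6) for _ in range(100_000)]
empirical_p = rolls.count(3) / len(rolls)

print(f"classical: {classical_p:.4f}")  # 0.1667
print(f"empirical: {empirical_p:.4f}")  # near 0.1667, varies by sample
```

The first quantity exists prior to, and independent of, any measurement; the second is an operational estimate. Collapsing the two is the mistake I have in mind.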
That set us up for my discussion of typology. I use a dichotomous distinction between typology and concept. For me, a typology is any term denoted or connoted by a scholar that has no clearly defined dimensionality (property space). Though I have not given it adequate thought, I suspect all connotative definitions fall into the typology group of my dichotomy. Ideal Type definitions certainly do. And another commonly proposed “concept” in our field does as well: nominal classification schemes.
Bailey would refer to the nominal classification schemes common in political science as unidimensional taxonomies. I noted that my dichotomous distinction is quite different from Bailey’s, for whom the most common form of taxonomy is a two-dimensional ordered classification that we frequently refer to as a 2×2 classification. Despite the fact that Bailey’s use of terminology does not map well onto practice in political science or my own dichotomous distinction, I embrace the risk of confusion to impress upon them how much effort other social science fields have invested in conceptualization. I asked them whether the selection from Bailey had left them with that impression, and received lots of head nods and a few “You can say that again” facial expressions in response.
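A minimal sketch of the contrast (my own illustration; the dimension names borrow Dahl’s contestation and inclusiveness, and the flat labels are a truncated version of Geddes’ regime types):

```python
from itertools import product

# A Bailey-style taxonomy: a 2x2 is the cross-product of two *ordered*
# dimensions, so every cell occupies a location in property space.
contestation = ["low", "high"]
inclusiveness = ["low", "high"]
two_by_two = list(product(contestation, inclusiveness))
# -> [('low', 'low'), ('low', 'high'), ('high', 'low'), ('high', 'high')]

# A nominal classification scheme, by contrast, is a bare list of labels
# with no dimension over which the types can be ordered.
nominal_types = ["personalist", "military", "single-party"]
```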
I then observed that one could retort that any nominal classification scheme could be readily converted to a series of binary “concepts.” In doing so one would produce a series of Ideal Type “concepts.” As noted above, from there one could adopt the Conceptual Best Practice Checklist to convert each of those into a concept with proper dimensionality.
I used Barbara Geddes’ typology of authoritarian regimes as my punching bag for illustrating nominal classification schemes. I chose it because I envy Geddes her intellect, and get to tell the class “If I could switch brains with Geddes, I would do it in a moment. That woman is wicked smart and a really great political scientist.” That is, I am a huge fan of much of Geddes’ work, and I love having conversations with her. Just as I point out the enormous value of some of Weber’s work before pillorying his penchant for Ideal Type definitions, I select Geddes’ regime typology because of the extent to which I can laud it.[6]
To get that rolling I asked whether anyone was familiar with the classification scheme, and then asked one of those who was to briefly describe the types so that everyone had a flavor for it. I then asked everyone to identify the number of dimensions (the property space) over which the types could be ordered. When nobody could come up with a conjecture, I asked whether anyone could offer speculation about a single dimension over which they might be ordered. Again, crickets (and the handful familiar with the typology did make an effort).
I then pointed out that just because neither I nor they could identify one or more dimensions over which the typology might be ordered did not mean that nobody could. Nor did it suggest that Geddes’ authoritarian regime typology was unhelpful for generating knowledge. Though I doubt any of them have (yet) internalized this, that position is not only dramatically at odds with Clarke & Primo’s position (to say nothing of the huge number of political scientists and others who use the scheme in their work), it is also contrary to the postmodern / constructivist understanding of human knowledge that I am advancing in the course.
I reminded them that I am arguing only that I believe we can better (more efficiently, and at a more regular, speedy rate) accumulate useful explanations of stylized facts, as defined in this course, if we adopt the Conceptual Best Practice Checklist above. If we do so, then my proposed distinction between Concepts and Typologies puts Ideal Type definitions and all other nominal classification schemes in the Typology group. And I maintain that we should recognize such efforts as important to the development of human knowledge, but not as useful as Concepts for the production of theories in our scientific knowledge community.[7]
We convey the full argument over the course of the semester. Hence these posts.
Assumptions & Logical Implications
The second set of readings for the week sketches the assumptions and logical implications building blocks. The readings for these topics come from Cohen & Nagel (1934): “The Subject Matter of Logic” and “What is a Proposition?” (pp. 3-16, 21-23, 27-30) and “The Function of Axioms,” “The Deductive Development of Hypotheses,” and “Hypotheses and Scientific Method” (pp. 129-33, 197-222). I also assigned Becky Morton’s sketch of verbal and formal models (Methods and Models, pp. 33-4, 36-43) and Miller & Page’s sketch of computational modeling (Complex Adaptive Systems, pp. 35-43, 57-62). Morton’s book is primarily an account of her vision of EITM, but I assign her discussion of non-formal and formal models. Because formal modeling tends to be reduced to game theory in political science, I also assign Miller & Page, which maps nicely onto some of Blalock’s discussion of theorizing (I include an overview of dynamic modeling in the Additional Recommended Reading). Both books also contain some discussion of the role/value of explicit assumptions and logical implications.
During seminar none of the students raised any of the issues broached in the Morton or Miller & Page reading, and I didn’t manage time well enough to work any in. But I did ask them for a show of hands if they believed that a social science theory should be logically coherent. Then I asked them why.
Crickets.
So I challenged them, saying something along the lines of:
Surely you must be able to come up with at least one reason logic is valuable? Or have you perhaps just accepted its value on authority? Do you believe that just because teachers and others have told you so for years?
Several students made a stumbling effort to articulate a reason why we should value a theory that proceeds logically from assumptions to implications. None did terribly well. There were lots of uncomfortable smiles, as if they knew in their bones that logic was important, but were too tongue-tied to say why.
I had planned to lead a discussion about the value of logic from the perspective of intersubjective agreement, from Wittgenstein’s Beetle in the Box, to the preceding discussion of conceptualization, to producing descriptions of the process(es) that explain how and why stylized facts emerge in our collective experiences, such that we can reproduce them in forms that large numbers of humans can recognize (e.g., datasets, experiments, and so on). But my prompting query failed to elicit that response. Who’dathunkit!?! I don’t know about you, but my prompting queries often strike students more as anchors than life jackets. ¯\_(ツ)_/¯
I was running out of time, and don’t honestly remember what I threw on the table in a disorganized effort to touch briefly on the several issues I’d planned to cover, but no longer had time to. I did point out that if one adopts a post-modern ontological position and/or Models as Maps approach, then it becomes nonsense to ask whether assumptions are true, realistic, what have you. I think I also reminded the students who’d taken the Math Camp about the value of Bayesian updating as decision making in the face of uncertainty, and observed its consistency with/similarity to Cohen & Nagel’s discussion of probabilistic inference.
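For the flavor of that Bayesian updating point, a minimal sketch (my illustration; the numbers are arbitrary, not anything from the Math Camp):

```python
# Bayesian updating as decision making under uncertainty: revise a prior
# belief in a hypothesis H after observing evidence E.
def update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H|E) via Bayes' rule."""
    joint_h = prior * p_e_given_h
    joint_not_h = (1 - prior) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Hypothetical numbers: a theory held with prior 0.3; the observed stylized
# fact is likely under the theory (0.8) and less likely otherwise (0.2).
posterior = update(prior=0.3, p_e_given_h=0.8, p_e_given_not_h=0.2)
print(f"{posterior:.3f}")  # 0.632: belief revised upward, not to certainty
```

Belief moves in degrees with the evidence rather than flipping to true or false, which is the consistency with probabilistic inference I had in mind.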
But I mostly wanted us to bandy about the claim that logical coherence should be the primary criterion we use to trim the infinite number of natural language text strings that might be considered “theories” to a reasonable set that we, the scientific knowledge community, will consider carefully. Cohen & Nagel (1934) are helpful here, with this passage:
The structure of the proposition must… be expressed and communicated by an appropriate structure of the symbols, so that not every combination of symbols can convey a proposition. “John rat blue Jones,” “Walking sat eat very,” are not symbols expressing propositions, but simply nonsense, unless indeed we are employing a code of some sort (pp. 27-8).
I embellish it with this thought experiment. Imagine that a monkey bangs out 12 lines of text on a keyboard. Do we want a criterion that permits us to rule out the “argument” thus produced without having to appeal to the cumbersome process of intersubjective agreement?
Yes. One of the central claims about the use value of developing a scientific knowledge community is that it produces efficient, progressively superior explanations of the stylized facts we observe. We will discuss what “progressively superior” means in a few weeks. The Gordian Knot problem never sleeps. Put another way, we advocate embracing the formal rules of logical consistency and completeness from assumptions to implications as a specific norm that demarcates theory from non-theory, on the grounds that it will enhance the efficient production of progressively superior explanations of the stylized facts.
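If it helps to see that norm mechanically, here is a toy sketch of checking that a set of assumptions is consistent and that a candidate implication actually follows from them (my illustration; the propositions are arbitrary):

```python
from itertools import product

# Assumptions as propositional formulas over three named atoms. A theory
# is consistent if at least one truth assignment satisfies every
# assumption, and an implication follows if it holds in all of them.
assumptions = [
    lambda a, b, c: not a or b,   # A1: a -> b
    lambda a, b, c: not b or c,   # A2: b -> c
]
claim = lambda a, b, c: not a or c  # candidate implication: a -> c

models = [v for v in product([True, False], repeat=3)
          if all(f(*v) for f in assumptions)]

print("consistent:", bool(models))                 # True
print("implied:", all(claim(*v) for v in models))  # True: a -> c follows
```

The monkey’s 12 lines fail before we even get this far, of course; the point is only that consistency and entailment are checkable by rule, without convening the cumbersome machinery of intersubjective agreement.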
It seems to me to be a common conjecture among political scientists that adopting a post-modern ontological position and constructivist perspective produces “an absence of standards” and makes it impossible to assess the use value of different explanations. Indeed, some conjecture along these lines is surely modal in our discipline, and likely a supermajority position. This course challenges that conjecture and embeds the full set of issues presently debated about how we should do science within a common framework built upon the foundation laid in the preceding weeks.
Outro
To prepare for class I generally scrawl on the white board wall of my office, take a photo of it with my iPad, and then refer to it during seminar. Though my handwriting is deplorable, I share those notes here, in case they might prove useful for filling in some blanks with respect to my discussion.

[Photo: Notes for Building Blocks seminar.]
@WilHMoo
[1] These two might well be linked. I haven’t given it sufficient thought, though surely others have. I may well have read about this somewhere, and simply forgotten.
[2] I am aware that this Best Practice Checklist rules out constants, for instance. And truth be told, it is silly to rule out constants as stylized facts by fiat. Which is to say, if pushed, I will cave on this point. But few, if any, first year PhD students are prepared to take on that issue, so I am quite comfortable with the hand waving.
[3] And I will tell them then that I leave the deterministic theorizing to others because I am much better at probabilistic theory construction and assessment of probabilistic hypotheses than I am at deterministic theorizing and hypothesis assessment.
[4] During Math Camp I illustrated the anti-intellectual position of objecting to the study of mathematics by social scientists as akin to objecting to the formal study of musical notation on the grounds that the formalization produces lots of negative outcomes. As a huge fan of African polyrhythms, percussion, the blues, rock and roll, ska, reggae, dancehall, DJ, dub, rap, hip hop, and electronica from house to trance to dubstep, I am well aware of the fact that none of these forms owe their existence to formal musical notation, and that the formalization has been used as a tool of culturicide. Yup, I got all that. But it just doesn’t follow that the formal language is the cause of oppression, nor that it would be a bad idea to study it and use it to produce music. I generally don’t like illustration via analogy, but I do like that one.
[5] And here I pull a Cohen & Nagel by assuming that my audience is familiar with Breaking Bad. #AintNoWayRoundAudienceAudienceAudience #PopCultureHegemony 😉
[6] I have nothing positive to say about Aristotle’s work.
[7] For the record, my examination fields for the PhD were Comparative (major), IR and Methods (minors), and my Dissertation co-chairs (Ted Gurr and Jim Scarritt) self-identified as, and were members of, the Comparative field at CU. I have read virtually every English-language piece on the so-called Comparative Method published between 1965 and circa 1998 (as well as a good chunk of the Comparative Sociology lit), when I more or less threw in the towel on trying to keep up with the Alice in Wonderland world of that exasperating literature. I also co-taught the Comparative Core seminar at UC, Riverside four or five times during the early to mid-90s. So, yes, I am aware that these be fightin’ words! I will also observe, as an aside, that Jonathan Nagler and I proposed, circa 1994, that the UCR Dept of Political Science (where we were faculty) abandon the American, Comparative, and IR fields and reconstitute the department over two fields: Political Behavior and Political Institutions. During graduate school I decided that our fields are anachronistic, path dependent nonsense that we would do very well to cast into the sea, and I have as yet seen no reason to update that belief.