(Caution: This piece shouldn’t be taken as an excuse to ignore the structural issues in our larger society regarding class, ideology, race, and gender, among others. Jesus constantly advocated for subaltern groups in Roman Palestine, and we are called to follow his example. I speak about specific ideological forces in academic settings that I see as both unbiblical and incoherent.)
A number of years ago now, I wrote an article describing why I left my university’s English department and switched to history. It wasn’t an eloquent article, and I sometimes wish I could retract it, if only for the inconvenience it caused me. I never took a class in the English department again even though my dream at the time was to become a novelist; I knew I was a marked man. I burned my bridges and walked forward into a very different future.
That article was the first of two critical revelations I’ve had about university (I’ll get to the second later). An angry old Humanities professor named Ron Srigley spelled it out far more eloquently than I did two years ago in the LA Review of Books. His argument is that in third- and fourth-tier universities across the country, the humanities are no longer being taught or learned. He attributes this to a number of familiar causes, including an expanding technocratic culture in academia, an explosion in administrative personnel who increasingly view professors as peripheral to the core needs of their “consumers” (students), and the growing power of students to demand changes to their educational and university experiences.
Srigley explores a couple of points that I touched on in my article but didn’t fully understand. The first is what I’ve referred to as the “bullshit factor,” or the ability that my English major friends and I believed we possessed to “bamboozle” our professors with our sparkling prose and strikingly original analysis. It took me into my fourth year to realize that, in my arrogance, I hadn’t noticed who was playing whom. The professors saw right through our bullshit, but for various reasons were unwilling to call us on it. Instead they coddled us, encouraged us, praised us – and awarded us grades we didn’t deserve.
Srigley (who is harsher than I am) calls it a scam – a scam perpetuated by a lack of support for professors in the Humanities, the consequences of grade appeals for the careers of (especially junior) faculty, and a prolonged erosion of Humanities departments in favour of professional faculties and technical training.
He describes such education as the equivalent of the Russian political system: a ruse meant to give the appearance that the totalitarian regime beneath it is playing by some semblance of democratic rules. Such shams don’t need to be convincing, but they do need to be entertaining.
“That’s my classes. There is no real education anymore, but I still have to create the impression that education is happening. Students will therefore come to class, but they will not learn. Professors will give lectures, but they will not teach. Students will receive grades, but they will not earn them. Awards and degrees will be granted, but they will exist only on paper.”
This leads to the second point I once made: that students no longer need to actually read the material to get impressive grades, which contributes to both student and administrator scorn for the affected disciplines. This point caused some push-back, since professors and fellow students noted that if I wasn’t reading the material, it was my own fault for not getting the full benefit of the course. I agreed, but countered that if the difference between reading very little of the material and reading all of it was a 10 to 15 percent bump in my final grade, what did that imply about the value of said material to the course? Srigley argues that less than 20 percent of his students even access the weekly readings for his courses, largely because they know they don’t have to – “they can get an 80 without ever opening a book.”
When I entered university at 19, my professors, my peers, and my grades affirmed that I was a good student, even though I didn’t do much of the reading and often felt that I was bullshitting in class. It took me a while to realize that I wasn’t brilliant, just arrogant, and that (as Srigley suggests) the professors, my peers, and I were all being played.
The second critical revelation I had about university came later, in the first year of my master’s program at a different school. I was forced to take a sociology class from a post-modernist, and discovered that we essentially didn’t speak the same language. It was a seminar class with few students, so I was forced to talk…a lot.
And everything I said was wrong, in one way or another. Wrong. Wrong. Wrong. The professor and I would look at each other with painful incomprehension, and I would realize that I’d once again assumed some archaic concept like “truth” or “facts.” I learned that there are no absolutes except structures of power (especially those of race, gender, and sexuality) and the sacredness of complexity – all else should be viewed with cynicism and deconstructed.
I’m used to being viewed as an anachronism, an academic who has held on to his faith despite the “enlightening” influences of the secular university. But that class was my first experience in which the foundations of my ontology and epistemology (what exists and how we can know it) were utterly alien. Other post-modernists heavily implied that my conservatism created an ideological rigidity which prevented me from understanding other viewpoints. I’d heard this argument before, and I’d mostly bought it, but after that class I began to ask serious questions.
It’s become a cliché in religious circles to point out the postmodernist paradox that there are no absolutes (except that one). This isn’t an entirely accurate characterization, as there are other sacred shibboleths that cannot really be interrogated but are integral to the post-modern perspective. It used to deeply upset me to argue with people who claimed to be wholly committed to deconstructing modern society, and yet were entirely selective about which artifices required deconstructing. As an outsider to academia, I figured that I was missing something, or that my religious convictions made me incapable of fully engaging the argument.
Actually, my failing was not looking a little closer to home for an explanation. As William Deresiewicz explained a couple of weeks ago in The American Scholar, the fusion of postmodernist thought and identity politics manifesting at elite schools and seeping through the system is also a form of religion. It encompasses its own belief systems, theological foundations, and moral absolutes. What I’d failed to realize was that my encounter in my sociology class was not with an academic theory, but with another religion – one that also believed in unmeasurable, yet omnipotent and omnipresent forces (although generally ending in -isms).
According to Deresiewicz, this academic religion also has its own dogma, a set of correct opinions and beliefs within a very narrow range: “There is a right way to think and a right way to talk, and also a right set of things to talk and think about.” Secularism is assumed, environmentalism is sacred. Issues of identity (the holy trinity of race, gender, and sexuality) are the centre of concern. Foucault plays the role Marx once did.
Students are encouraged to emulate the New York Times Opinion section (jab), in that they are highly diverse physically but all but homogeneous ideologically. They are different bodies speaking with the same voice. Divergent opinions are treated as the opposite of dogma – heresy – which “must be eradicated: by education, by reeducation—if necessary, by censorship.” Individuals’ opinions are expected to follow from their identity, and can in fact be presumed from it. Therefore, what someone says is not as important as what they are: the weight of someone’s voice rests on identity rather than content.
This is of course meant to help offset the omnipresent and omnipotent “-isms” that form the core deities (or demons) of this new religion, but in practice it simply elevates the oppressed to the role of oppressor, without creating the foundation for actual systemic equality. In the words of Deresiewicz, “Progressive faculty and students at selective private colleges will often say that they want to dismantle the hierarchies of power that persist in society at large. Their actions often suggest that in fact they would like to invert them. All groups are equal, but some are more equal than others.”
At my institution, things are not as bad as Deresiewicz describes, and don’t deserve the gross generalizations I made above. I hear the language and jargon of said religion, but it has not been accepted or internalized at a dangerous level. As someone who takes my own religious tradition seriously, I could give some excellent advice about the dangers of rigid theology and polarizing dogma, but I doubt that would be considered helpful. The cautions of the barbarians have never been heeded by those who believe they possess a divine mission to forcefully enlighten humanity, and academics are no exception.
Together, my two critical revelations highlight a disturbing trend. Humanities instruction is becoming increasingly vapid, humanities students are becoming increasingly ignorant of their own disciplines, and the gap is being filled by a theological conviction of possessing a sacred monopoly on truth. Obviously Srigley and Deresiewicz are discussing two very different contexts, and the two forces are neither fully realized nor truly integrated, but both articles spoke to me in a way that is worrisome.
There is truth to both pieces. Not absolute truth, not universal truth, but the description of dark and disturbing potential.
My friend Mannonfire wrote a more critical perspective on the second article here. We had some fun in the comment stream as well. 🙂
Very interesting, Paul. I’m just about to read the two articles now.
I agree about the religious nature of this – but that’s also true of lots of things. For example, I don’t know anything about evolution except what I’ve been taught, and I believe those who teach me. They say that their knowledge is evidence-based, and I take that on faith, since I don’t have time to examine the evidence myself.
I also think that there is a consensus building – helped by Donald Trump, of all things – that the real issue with “Political Correctness” – or the thing that needs to be fixed, at any rate – is the focus on identifying and rooting out heterodoxy (again, an appropriately religious term). That is to say, the fear people have that they are about to say something racist, or just plain wrong, despite not intending to, and be criticised or cast out for that mistake. The reason Donald Trump has helped is that his anti-political-correctness has helped identify this problem by being of the other kind: in his case, “not politically correct” means “free to be knowingly insulting and rude” rather than “perhaps inadvertently offensive, or unknowing, or ignorant of some meta or para aspect of my discourse.” The first kind of anti-PC is gaining traction because the second kind has been treated so absolutely recently. We need to distinguish well-meaning, inadvertent, or mistaken non-PC speech from the deliberately confrontational kind.
The big thing in your piece, though, is that I would argue you misunderstand what grades are in relation to education (you are not alone – most people in education do).
There are different ways of understanding what grades are and what they do: historically, systemically, motivationally, and so on. Historically, they are a very recent phenomenon – no more (in a serious systemic fashion) than say 150-200 years old. They really start appearing in the U.S. college system, for example, after the U.S. Civil War, and we are well into the 20th century before they are codified into the A-F system used in North America (I think I got this from Laurence R. Veysey, The Emergence of the American University, University of Chicago Press, 1974). The real impetus comes from Harvard in the 1890s (like so much of the modern North American university system), and from Johns Hopkins and the University of Michigan as early research universities. Interestingly, there are complaints about grade inflation almost immediately after the introduction of grades at Harvard.
Grades are introduced in part due to the broadening of the university and the rise of professional occupations after the Civil War. The reason is that at that point you need to be able to say how well young men (pretty much always young men at this point) did, in order to decide whether to hire them or give them competitive benefits (in the U.K. about this time, you start getting the civil service exam – which Anthony Trollope hated with a passion – for the same reason). Before that, you didn’t need to grade people. The point was to educate them into the culture of their class, but they didn’t need to be approved or certified for anything other than social reasons. Interestingly, the earliest grades at Yale (again from Veysey, as I remember) were simply the distinguished ones: i.e. you’d have a record if you were exceptional in some way, but otherwise you just were.
Given all this, the way to understand grades systemically is that they are really bits of information in what has become, in essence, an expert system for classifying students. Expert systems work by attaching calculable values to qualitative decisions made by experts, so that you can design machines to support decision making. An example might be a decision-support machine in an airplane that spits out various options to a pilot in response to a situation. The responses are calculated based on ratings for different options given by experts previously, which can now be combined to summarise their likely opinions about the current situation (this is a very simplified explanation).
Grades work the same way: I get a student in my Old English class whom I determine in my capacity as an expert to be an excellent Anglo-Saxonist. A mathematician two years later gets the same student in a calculus class and determines in their expert opinion that they are pretty good at calculus. And the photography professor decides they are pretty good at photography. The only way of summarising these (otherwise incompatible) decisions and ranking this student in comparison to other students who took equally incomparable courses, is to convert the expert opinion to grades and calculate a GPA.
The important thing in all this is that grades are not learning or knowledge. They are a way of encoding professors’ opinions of how much learning a student did in otherwise incomparable circumstances, and then later calculating an answer to the completely different question of “how well did this person do at university relative to others?”
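To make that concrete, here is a minimal sketch of the conversion in Python. The courses come from the example above, but the letter grades assigned to each expert opinion and the 4-point conversion table are purely illustrative assumptions, not anything from the article:

```python
# Sketch only: grades as an expert system. The courses come from the
# example above; the letter grades and 4-point scale are illustrative.

# Each professor's qualitative expert judgment, recorded per course.
expert_opinions = {
    "Old English": "A",    # "an excellent Anglo-Saxonist"
    "Calculus": "B+",      # "pretty good at calculus"
    "Photography": "B+",   # "pretty good at photography"
}

# The conversion that makes otherwise incomparable judgments calculable.
GRADE_POINTS = {
    "A": 4.0, "A-": 3.7, "B+": 3.3, "B": 3.0, "B-": 2.7,
    "C+": 2.3, "C": 2.0, "D": 1.0, "F": 0.0,
}

def gpa(opinions):
    """Collapse per-course expert opinions into one comparable number."""
    points = [GRADE_POINTS[letter] for letter in opinions.values()]
    return round(sum(points) / len(points), 2)

print(gpa(expert_opinions))  # 3.53 -- a ranking device, not a measure of learning
```

The output number is useful precisely because it throws away everything the experts actually knew: it can rank this student against one who took entirely different courses, but it says nothing about Anglo-Saxon, calculus, or photography.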
So this brings us back to the “bullshit factor.”
It is very clever to realise that it might be the students who are the dupes in all this. But I think it is a mistake to see professors’ unwillingness to “call” the bullshit as evidence of decline or collapse. Another answer is that it’s none of their business. Their interest is promoting student learning. Students who focus on the cost-benefit relationship between effort-to-learn and likelihood-of-getting-a-good-grade are not really in the learning business; they are trying to game the expert system. They may be in the professor’s class, but they aren’t really there for the thing the professor is actually offering. Indeed, until about the time of the U.S. Civil War, there would have been literally no reason for them to be in university at all, since the thing they are there for (i.e. a good grade) didn’t really exist.
So what do you do about students like this?
I suppose you could give out bad grades to such students and “call” their bullshit. But I’d ask why you are bothering. At best, all you are doing is teaching them to do a better job of gaming the expert system: you haven’t changed the fact that grades are what they are shooting for; you are simply teaching them that gaming the system requires more effort than they are currently offering (i.e. that the one form of bullshit they were doing for grades is less effective than a different form of bullshit in which they fake sincerity).
The alternative is to refuse to play their game – that is, to refuse to accept that your primary business is grading rather than teaching and mentoring.
I think that’s what’s going on with most of the bullshit cycle you talk about – students getting grades they know are fake for work the professor knew was fake. I think that many professors, without really thinking it through systematically, are actually reproducing the pre-Civil War system of “cum laude” grading – where you got a “grade” only if you stood out, but otherwise you just were. In other words, what they are doing is giving everybody basically the same grade and reserving a bit at the very top – in terms of grades and letters of reference – for the ones who actually stand out for good work.
The only problem with this that I can see is that it doesn’t attack the actual problem: students who are wasting their time because they confuse grades with learning (I don’t consider grade inflation to be a problem, since people have been complaining about it literally for as long as there have been grades, and because it is really an example of the Golden Age fallacy).
What I’ve started doing is attacking this problem by removing it altogether. I now give everybody the same grade for any term work they hand in (nominally 100%), while reserving about 20% of the course grade for badges worth about 3% each, which I assign for (rare) examples of “excellent” work. And I explicitly don’t care if you do the reading, follow the assignment, or do anything else, on the theory that it isn’t my place to tell you how you should spend your time or what helps you learn better – not doing the reading is as valid a thing to be doing with your life as doing the reading, provided you don’t care about the opportunity cost.
I do give letter grades to the work that is handed in, so that if you are interested you can see what I thought of it qualitatively, using a notation system that people seem to prefer (I’d personally rather just comment on it). Those grades don’t count for anything, but they are predictive of what you will get on the final assignments (which are graded summatively using a real A-F system that counts) if you hand in the same kind of work. These final assignments are also worth about 50% of the course grade and represent my expert opinion about how well you mastered the course material (I do this for the University and because the good students need differential grades to get into grad school; but, again, I wouldn’t if I didn’t have to or if it weren’t so preferred by the students).
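As a rough sketch of how such a scheme might tally up, something like the following. The weights are assumptions inferred from the description above (term work about 30%, badges about 20% in roughly 3% increments, summative finals about 50%), not exact figures:

```python
# Sketch of the grading scheme described above. The weights are
# assumptions inferred from the description, not exact figures.

TERM_WEIGHT = 0.30    # everyone who hands work in gets full credit here
BADGE_WEIGHT = 0.20   # reserved for rare "excellent work" badges
FINAL_WEIGHT = 0.50   # graded summatively on a real A-F scale
BADGE_VALUE = 0.03    # each badge is worth roughly 3% of the course grade

def course_grade(handed_in, total_assignments, badges_earned, final_percent):
    """Combine the three components into a course percentage."""
    term = TERM_WEIGHT * (handed_in / total_assignments)
    badges = min(badges_earned * BADGE_VALUE, BADGE_WEIGHT)
    final = FINAL_WEIGHT * (final_percent / 100)
    return round(100 * (term + badges + final), 1)

# A student who hands everything in, earns one badge, and writes a 75%
# final lands at 70.5% -- the summative finals, not the term work, decide.
print(course_grade(8, 8, 1, 75.0))
```

The numbers make the design choice visible: once term work is a constant for everyone who participates, the only grade left worth gaming is the summative one, and that one actually measures something.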
What’s interesting about this is that it reverses the value proposition involved in bullshitting for grades. Bullshitting is no longer about whether you can get something (a grade) for nothing (i.e. no real work) because you are the paying customer, but about whether you are getting nothing (i.e. no knowledge) out of a course you are the paying customer for. When students come asking me if they can do something bullshitty for their assignments, I always tell them that they can – and in fact should – if they feel that that’s the best use of their time. Whether they do what they are asking, or something else, will have no effect on the grade they receive for the assignment, though it does conceivably reduce the amount of supervised practice they will have in preparation for the final summative assignments.
Bizarrely, this also results in tougher and more honest grading. Because everybody gets 100% on their term work, I get no grade-grubbing from students regardless of the advisory letter grade I give, though some students do come and ask how they can improve their grade on subsequent assignments. And instead of worrying about disappointing students by giving them a bad advisory grade (a major factor in grade inflation, in my view), I feel obliged to give them as reasonable and accurate an advisory grade as possible on term work, because they are presumably using it to estimate what they need to improve for the final assignments. (Instead of it being in the student’s interest that I inflate my letter grades, in this case it is in their interest that I deflate them.)
And then finally, when it comes to the final assignments, I’ve already set a pretty tough grading expectation: having established a tough C on the term work, I find it much easier to carry that through the final assignments.
The result has been that in my last three classes graded this way, the curve has been about one-third to two-thirds of a letter grade lower than the average for the year at the U of L. By giving away about half the grades, I actually end up having to bell the grades upward if I want to hit the institutional average.