Do you really want to know "How Professors Think?"

Sure you do!  Check out this new book (via Inside Higher Ed) by Michèle Lamont, How Professors Think: Inside the Curious World of Academic Judgment.  The author, a sociologist, was permitted to observe the review of grant applications at several different and prominent funding agencies, and she has concluded that:

As for excellence, that quality that peer review theoretically promotes, Lamont isn’t so sure it exists. It may be invoked all the time, she said in an interview, but her examination of the process suggests no way to measure it. “I think excellence means nothing,” she said, suggesting that panels be honest about the criteria they use. “I think you have to give the criteria. Typically it’s originality, feasibility, and also the social and intellectual significance.” There is nothing wrong with those definitions per se, she said, but people shouldn’t pretend they equate with some scientific measure of excellence, as other criteria could be used as well.

The most common flaw she documents is a pattern of professors applying very personal interests to evaluating the work before them. “People define what is exciting as what speaks to their own personal interest, and their own research,” she said.

That’s probably not so surprising–most of us who win grants probably didn’t write proposals that were all that much better than the unfunded proposals–we just got luckier in terms of who read our applications and helped move them on up the line.  There are an awful lot of smart, hardworking people out there (like my commenters!), and there aren’t all that many grant programs or fellowships.  Perhaps more interestingly, Lamont found that “professors in different disciplines take very different approaches to decision making.  The gap between humanities and social sciences scholars is as large as anything C.P. Snow saw between the humanities and the hard sciences.”   

Many humanities professors, she writes, “rank what promises to be ‘fascinating’ above what may turn out to be ‘true.’ ” She quotes an English professor she observed explaining the value of a particular project: “My thing is, even if it doesn’t work, I think it will provoke really fascinating conversations. So I was really not interested in whether it’s true or not.”

In contrast, Lamont quotes a political scientist on what he values in proposals he reviews: “Validity is one, and you might say parsimony is another, I think that’s relatively important, but not nearly as important as validity. It’s the notion that a good theory is one that maximizes the ratio between the information that is captured in the independent variable and the information that is captured in the prediction, in the dependent variable.”

Uhhh. . . yeah.  Who the hell talks about their research this way?  I don’t know if she quoted this guy because she (as a social scientist) admired his clear and precise language (hint:  I’m kidding here!), or because she thought his statement is laughable on its face, but I’d like to stand up for supporting interesting projects that may or may not lead directly to the exact research findings and arguments as laid out in the proposal.  Yes, I value the “fascinating” over the “true,” because I assume that we’re all adults here, and that we understand that the nature of “truth” is contingent and consensual.  I don’t go looking for “truth” in scholarship–just for arguments that are backed up by deep and creative research and due diligence with the historiography.  It is entirely responsible and reasonable to change one’s argument as one completes more research–in fact, that’s the only honest way for a scholar to proceed–even if it doesn’t “maximize the ratio between the information that is captured in the independent variable and the information that is captured in the prediction, in the dependent variable.”  People who ask “fascinating” questions tend to come up with fascinating answers, even if they’re not the ones they thought they’d come up with originally.  And that’s the “truth.”

4 thoughts on "Do you really want to know 'How Professors Think?'"

  1. This was a fascinating article, but not surprising. Anytime you get a group of people selecting who gets a grant, I just assume that objectivity doesn’t exist.

    I do agree that if a subject is fascinating, it has merit. As for truth, sometimes it takes generations for this to emerge.

    Research indicates that people make much better decisions collectively than they do when only a small group is involved.

    Because we know that professors and “peer” reviews are highly subjective, I think this argues for more not less diversity on the selection committees to begin with.


  2. funny, I do talk about my research in terms of validity b/c often I am asked to. I have a whole memorized spiel about how validity was achieved, and whenever I don’t take it out for the humanities conferences I go to, I watch eyes glaze over. Whenever I don’t put it in at social science conferences . . . Someone should write a book: Discipline Matters.

    That said, I agree: interesting projects, whether they pan out or not, should be among the projects funded, b/c it is the questions that move us to think, not just the shiny outcomes.


  3. Except for a grant *renewal* application, the reviewer is not likely to *know* very much about how much “truth” is “captured” in either variable, right? All you can do is try to look at the plausibility of the proposal. All other things being relatively equal, it stands to reason that the more interesting ones in the eyes of the panel member will get the nod. The other alternative I guess would be to have a big rolling drum like the ones they use to pull out lottery winners and the like. As for the obtaining of awards, the best advice is probably to just keep shooting. I recently served on a preliminary review panel, the obligation of which was one price of having won one of the awards the previous year. I ended up feeling shocked that I had won, but you do come to realize that it’s a pretty capricious process. On either side of the ball, all you can do is try to do a conscientious job.


  4. Ok, I’ve got to chime in because of those two versions of what “research” is that are quoted… Well, let’s just say that if a social scientist (or a political scientist specifically) has research that is rooted in quantitative methodologies/data, it is my experience that this is exactly how they talk about their research and what makes “good” research. Yes, the one political scientist I know of this stripe does believe in “truth” and in things like a “good theory is one that maximizes the ratio between the information that is captured in the independent variable and the information that is captured in the prediction, in the dependent variable.” I’ll also say that the quantitatively oriented psychology PhD that I dated for a time talked this way, too. So I’m inclined to think that the divide represented here is not a joke – it represents a very real divide between and within disciplines.


Let me have it!
