Tuesday, October 22, 2013

So you don't find law reviews valuable [yawn]

Adam Liptak reports that law review articles are not valuable.  Thanks to my buddy C__ for sharing the link.  And, yawn.

I have tended to appreciate Adam Liptak's Supreme Court reporting.  For example, I learned from him the other day that the Roberts Court is not terribly activist, under the most defensible definition (willingness to rule legislation unconstitutional).  At least it's no Rehnquist Court.

What's wrong with Liptak's recent column?  First, the subject is dullsville.  So Roberts doesn't appreciate legal scholarship; he said as much to my faculty when speaking here a few years back.  Second, Liptak takes a shotgun to the subject.  I count five complaints about scholarship in his telling, none of which is developed sufficiently to constitute an argument.  Third, in a few places the column is demonstrably wrong.

Judges do not find legal scholarship helpful?  A quick search in the Westlaw sct database finds 1,300 Supreme Court opinions containing the phrase "l. rev." or "l.j."  Expanding the search to include the cta (courts of appeals) database causes it to time out at 10,000 results.  Some, but not many, of those are in opinions by Chief Justice Roberts himself.  Thirty seconds of perusal suggests false positives are not a major concern.  Not helpful?

Liptak reports that 37% of Supreme Court opinions cite to law reviews.  That's down from 50% a few decades ago.  Meaningful legal scholarship is not exactly an endangered species, even if it does find itself competing for judges' attention with other sources of information.  Amicus curiae briefs, anyone?  I'd be curious whether the decline in law review citations tracks an increase in amicus participation, with many of those briefs submitted by academic amici.  (A not-unrelated query:  how many law review articles start as amicus briefs, and how many amicus briefs start as law review articles?)

Liptak references but does not analyze several complaints.  Student editors.  Arcane theoretical topics.  Publication of articles by faculty at the same school that publishes the review.  Reliance on heuristics like the writer's CV.  40% of articles never cited once.

I have a few responses to those.  I too complain about student editors, and I too game the system in submissions -- adding a line like "this is the only article ever to challenge the orthodox view!", only to take it out in the editing process.  But my complaints about student editors ultimately all turn on their not picking my articles to publish.  On the day that Harvard Law Review comes calling, I will confess my prior error and acknowledge the genius of the student editing process.  As, I propose, will we all.  [OK, at least one of us at runningprofs has had that sort of success in placement recently, but I won't put him on the spot here!]

I'd need to see empirics to back up the idea that student editors can't pick good articles.  My guess is that peer-selected and -reviewed articles are just as subject to weakness as are student-selected and -edited articles.  There are certainly many examples, some spoofs, some real flawed scholarship, appearing in even serious peer-reviewed journals.  [Here.  And here.  And here.  And the most famous spoof here.]  Moreover, a large number of law journals are moving to either pure peer review or a partial peer-review process.  My tenure dossier, up for its first vote today, contains four pieces (two that I deem "articles") that were peer selected and edited, and I've been asked on more than one occasion to offer a fairly casual (and unpaid, I might add) peer review of submissions to the student-edited Stanford Law Review.  Finally, as a practical matter, peer review is a sliding scale.  How many articles are published these days without some amount of workshopping or circulation for comment?

Arcane theoretical topics:  law reviews are like the proverbial infinite monkeys on typewriters.  Some articles matter.  Some don't.  None of us knows ex ante which is which.  Example:  "Let us restate all of antitrust law in the language of neo-classical economics!  And let's not stop there -- there's so much law to be restated!"  (Chicago School.)  Or:  "I have an idea:  maybe we should read statutes without paying any attention to what the legislature actually meant!"  (Textualism.)  This response also answers the apparent complaint that much of what is written is never even cited (or, presumably, read).  So long as enough is written, there will be plenty of relevance to go around.

I find myself wondering whether judging wouldn't be improved if Roberts et al. dismounted the high horse and did some background reading.  Judge Posner famously confessed error regarding his 2007 voter ID opinion from my state.  It's more than a little interesting to me that many articles preceding that opinion had identified the racially discriminatory intent and effect of voter ID laws.  A two-minute Westlaw search, for example, turned up at least two 40th-anniversary symposia (Howard, South Carolina) on the 1965 Voting Rights Act, published in law reviews in 2006.  Some of those articles discuss voter ID laws and their racially discriminatory effects.

Reliance on heuristics like the writer's CV is just reality.  I do it when I select what to read.  (To be clear, my heuristic may be slightly more relevant, as the CV entry that most interests me is "what else did this author write" -- but it's the same idea.)  And not every law review operates this way.  The Harvard Law Review has a blind submission process.  Others may as well.

And in a few places Liptak's column is demonstrably wrong (or at least misleading).  I've already addressed the claim that judges don't find reviews helpful.  How about Liptak's final paragraph?  He quotes Seth Waxman, a leading Supreme Court advocate, as having said 11 years ago that "only a true naif would blunder to mention [a law review article] at oral argument" before the Court.  Perhaps in that particular theater there are traditions to be observed, but I'll counter with my own experience.  Having served as initial drafter on approximately 20 briefs and petitions to the Supreme Court in my short appellate career, I can tell you that every one of those papers cited at least one law review article -- and many cited several.  Not one such filing was ever bounced or publicly derided, and, when I once went looking, I flattered myself that more than one had an apparent influence on some part or another of the Court's opinion in a particular case.


4 comments:

  1. Brings back fond memories of my days as an Articles Editor for the University of Chicago Law Review. In keeping with the school's apparent ethos of perversity -- "Why center the curve on a B+ when we could center it on a C?" -- we had only three Articles Editors, compared to ~10 for many comparable journals. And we received several thousand submissions per year, each of which one of us was supposed to screen. A few points come to mind here.

    (1) It is literally impossible to read all of them, or even most of them. Scan them, sure, and maybe read a few paragraphs, but nothing more. Heuristics were a survival tool.

    (2) Having said that, 95-98% of articles are easy to rule out. The overall level of scholarship is very bad. Or, at least, it's not what the elite journals are looking for. We got countless articles that did little more than summarize a recent Supreme Court case or Court X's 2001 habeas petitions.

    (3) Oddly, now that I am an appellate litigator, the articles I find most helpful are the ones I never would have given a second glance when I was in charge of accepting articles. Perhaps the most valuable are those that provide statistical analysis of a phenomenon -- that sort of thing is invaluable when you're trying to make certain kinds of points. But it's often thought too workmanlike for the top journals, which are more interested in high-minded concept pieces.

    (4) One of the most valuable heuristics we used had nothing to do with an author's pedigree. Instead, it was the content of the cover letter. The ones affixed to articles we selected almost always consisted of one sentence saying something like, "I would appreciate your considering the attached article." When a cover letter attempted to summarize the piece, it was the sign that the article couldn't stand on its own. This wasn't just my observation, but a time-honored thought that was passed down from outgoing articles editors to incoming ones.

    (5) At Chicago, I'd like to think we were pretty good writers and bright people. We could polish up the writing -- some smart professors were not the most brilliant writers -- and offer thoughts for clarity. Maybe this is a dissatisfying system, but I wonder whether it's any worse than a clerk's summary of briefs for a judge. These days I write briefs in tax cases for federal appellate courts, an area in which many judges lack expertise and one in which all clerks lack a meaningful background. This isn't such a different situation from an articles editor reading scholarship on an obscure subject. It's not a perfect system, but I don't know what a better one might be.

    (6) Peer review seems kind of impossible. Even if we had sought peer opinions on only 10% of the articles we received, that would have been hundreds a year. And, for the more competitive journals, the delay is tantamount to unilateral disarmament in the competition for the handful of articles each year that are self-evidently brilliant.

    Replies
    1. So, this is helpful advice: "When a cover letter attempted to summarize the piece, it was the sign that the article couldn't stand on its own. This wasn't just my observation, but a time-honored thought that was passed down from outgoing articles editors to incoming ones." Same story for "this article has been discussed on such-and-such a blog"?

  2. I don't know. Blogs weren't around in 2001! But it is a nearly universal trait that famous scholars write cover letters that basically communicate the message, "I'm suffering the indignity of writing a cover letter only because it's expected. Now read my article." This is true to such an extent that, when I received such a letter from a person I'd never heard of, he or she almost always turned out to be frequently published. My advice: no more than a very short paragraph that makes it clear you're not begging for attention. I don't think mentioning a blog discussion would hurt, as long as the blog is a heavy hitter. SCOTUSblog, fine. My blog on the Alaska ride, less so.

    Of course, I'm willing to believe that my broad guidance might not hold for the less selective journals, which may include subject-matter journals at the top schools. My sense is that those journals are more willing to publish articles that are helpful, rather than holding out for ones that are profound. A rational strategy might be to have a "top 20 law review" cover letter that says very little, and then another for broader audiences. I'd say, though, that in no case should it be more than a couple of moderate-length paragraphs, and it should eschew hyperbole.

  3. Gotta say, I've had pretty good luck with my placements, and am a bit of a maximalist with the cover letter. I suspect that if you are Bruce Ackerman you can get away with the short cover letter, but if you are Ted or Max or Spencer, with middle tier letterhead, you need to get the editors' attention.

    That said, the reality is that for us to get our articles even pulled off the stack at a top 10 journal, we have to be expediting from a top 50 journal. Once you are expediting, I doubt the cover letter matters much. At that point, other proxies for quality have kicked in . . .
