Author Archives: slindquist

More on Selection Bias

In my last blog, I noted the problem of selection bias in studying appellate court decisions and also pointed out that Harold Spaeth is coding a sample of denied certiorari petitions that will help us assess the degree to which this problem exists at the SCOTUS. In a recent email, he let me know that the Burger Court sample is completed (available at the South Carolina JURI site), and that:
“I am not quite finished coding Blackmun’s docket sheets on the Rehnquist Court (1986-94), but I expect to finish this database by early Fall. It will contain a sample of the Court’s denied petitions allowing individuals to ascertain the proportion of petitions dealing with various legal and constitutional provisions that the Court accepted. No longer will any basis exist for selecting on the dependent variable. The direction of the denied petitions is also provided along with other more or less pertinent data.”
Thanks to Harold for this information.
Of course, this does not solve the problem of selection bias at the U.S. Courts of Appeals or even the trial courts, since the concern could be expressed that disputes settle and therefore there is selection bias in any study of judicial decision making.
This problem ultimately cannot be resolved completely because, as I said in my last post, at some point it becomes turtles all the way down. Of course, the extent to which selection bias matters depends on the questions the researcher is asking. If you are using court cases as your database to say something about disputes writ large, you've got a problem. But if you are studying court cases to say something about court cases, well, then I'm less concerned, even in appellate courts, assuming they have mandatory dockets. The problem obviously becomes more pronounced when courts exercise discretion over their own dockets.

In my next blog, I'll have something more to say about alleged inaccuracies or miscodings in the Spaeth or Songer Databases.

Challenges to Judicial Databases

I’m so pleased that Jeff and Andy invited me to blog for Voir Dire.

The first item I wanted to raise to readers involves the federal judicial databases that Harold Spaeth (Supreme Court Database) and Don Songer (U.S. Court of Appeals Database) developed for public use using funding from the National Science Foundation.  (There is also a terrific state court database compiled by Melinda Hall and Paul Brace as well.)

A number of potential problems have been discussed in recent manuscripts and publications concerning whether use of these databases yields biased or inaccurate results.  One line of criticism involves whether the databases miscode cases: most of these critiques have emerged from law school faculty and I’ll discuss those in a later post.

The other is a continuing concern from political scientists and law professors alike: selection bias.  If most cases are settled or not appealed from the trial court, what influence does that selection process have on the conclusions we reach from the analysis of judicial voting behavior?  We could react to this problem with a flip response that selection bias affects most everything we study and ultimately, “it’s turtles all the way down.”   From Stephen Hawking’s book:

A well-known scientist (some say it was Bertrand Russell) once gave a public lecture on astronomy that described how the earth orbits around the sun and how the sun, in turn, orbits around the center of a vast collection of stars called our galaxy. At the end of the lecture, a little old lady at the back of the room got up and said: "What you have told us is rubbish. The world is really a flat plate supported on the back of a giant tortoise." The scientist gave a superior smile before replying, "What is the tortoise standing on?" "You're very clever, young man, very clever," said the old lady. "But it's turtles all the way down!"

Yet, to use another animal metaphor, we can't hide our heads in the sand on this issue. At the Supreme Court, the problem may be particularly acute if the discretionary agenda-setting process biases the results of studies purporting to find predictable patterns in the justices' voting behavior. Where case selection is not random, it affects the inferences we can draw from our statistical models. One excellent treatment of this subject was published recently in JELS by Jonathan Kastellec and Jeff Lax, available here.
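To see why non-random selection distorts inference, consider a minimal simulation sketch (entirely hypothetical numbers, written in Python; the grant probabilities and "merit" scores are my own invented assumptions, not drawn from any of the databases discussed here). If the Court is more likely to grant cert in cases it is inclined to reverse, the reversal rate observed among granted cases will badly overstate the rate in the full pool of petitions:

```python
import random

random.seed(42)

# Hypothetical population of 100,000 cert petitions.
# Each petition gets a latent "merit" score; assume the Court
# would reverse whenever merit > 0.
petitions = [random.gauss(0, 1) for _ in range(100_000)]

# Non-random docket: assume (hypothetically) the Court grants cert
# with probability 0.9 to petitions it would reverse and 0.1 to
# those it would affirm -- i.e., selection on the dependent variable.
granted = [m for m in petitions if random.random() < (0.9 if m > 0 else 0.1)]

true_rate = sum(m > 0 for m in petitions) / len(petitions)
observed_rate = sum(m > 0 for m in granted) / len(granted)

print(f"Reversal rate among all petitions:  {true_rate:.2f}")
print(f"Reversal rate among granted cases:  {observed_rate:.2f}")
```

Here the population reversal rate is about 0.50 by construction, but the rate estimated from granted cases alone lands near 0.90. Any model fit only to the granted cases inherits that distortion, which is exactly why a coded sample of denied petitions, like the one Spaeth is compiling, matters.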

They conduct simulations and conclude that selection bias poses a moderate to severe problem in studies of judicial behavior at the Court, and they propose several solutions. Fortunately for us, Harold Spaeth is in the midst of coding lower court decisions from which cert was requested. I'll check with him on the status of that project and provide an update in my next post.