The Journal of Things We Like (Lots)

Tag Archives: Librarianship and Legal Technology

From the Ivory Tower to the Judicial Trenches: Are We Bridging the Divide?

Most in legal academia would consider citation of their law review article in a judicial opinion an honor. However, most probably also remember Chief Justice Roberts’ 2011 comment that an article about “the influence of Immanuel Kant on evidentiary approaches in Eighteenth Century Bulgaria or something…isn’t of much help to the bar.” The Chief Justice’s comment may leave you wondering how often judicial opinions have cited law review articles and what factors might make your article into a rare unicorn. Mr. Detweiler answers these questions and more in May It Please the Court: A Longitudinal Study of Judicial Citation to Academic Legal Periodicals.

Mr. Detweiler has compiled a list of state and federal court citations to legal academic journals from 1945-2018 and mapped them as a proportion of all reported opinions and by total number annually. He tracks the ebb and flow of citations through the years and makes interesting observations about what may influence increases and decreases in citation frequency. But he doesn’t stop there. His research then compares citation frequency from 1970-2018 of articles in Harvard Law Review and Yale Law Journal with flagship journals from sample schools in each tier of the U.S. News rankings. The article also surveys the history of academic law journals, their earliest judicial citations, and the explosive growth in the number of journals beginning in the 1970s.

The article begins with a brief history of student-edited law reviews and their relatively slow acceptance by the judiciary. Mr. Detweiler notes Chief Justice Taft’s complaint about his colleagues’ “‘undignified’ use of law review material in their dissents.” But change was already underway. His successor, Chief Justice Hughes, labeled law reviews the “fourth estate of the law.” Mr. Detweiler then examines all citations of academic law journals from 1945-2018 in reported state and federal cases, with graphs illustrating the changes over time. The percentage of cases citing law reviews rose from 1.8% in 1945 to nearly 5% in the mid-1960s and 1970s, with a mid-decade dip of about 0.5%. Mr. Detweiler notes that the peak of 4.9% represents a 172% increase in citing cases over the 1945 rate. After the mid-1970s peak, the percentage of opinions citing articles declined over the next two decades. Since the mid-1990s, the percentage has leveled out somewhat, fluctuating between 1.5% and nearly 2% and standing at 1.8% in 2018. A second graph charts the absolute number of opinions citing law review articles, which shows a comparable rise and decline. Mr. Detweiler attributes a portion of the percentage decrease in the early 1980s to the number of reported opinions growing more quickly than the number of citing cases.
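The 172% figure is easy to verify from the two rates reported (this arithmetic check is mine, not the article’s):

\[
\frac{4.9 - 1.8}{1.8} \approx 1.72,
\]

that is, the peak rate of citing cases was about 172% above the 1945 rate.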

Mr. Detweiler posits several possible causes for the decrease in the percentage of cases citing law reviews from its heyday in the mid-1960s and 1970s to its current level. Two of the most compelling are technological advances and changes in the content of academic legal scholarship. Both Lexis and Westlaw launched in the mid-1970s, providing easier access to a body of case law that was also growing in breadth. Academic law reviews were incorporated into these research systems more slowly and did not receive expansive coverage until the mid-1990s. Judges and their clerks could easily access case law (especially binding precedent) directly instead of relying on scholarly works.

Mr. Detweiler also highlights a shift, beginning in the 1970s at higher-ranked law schools, away from traditional doctrinal scholarship toward interdisciplinary work and new areas of scholarship not as directly applicable to the everyday work of attorneys and judges. This point becomes important when we view differences in citation rates between flagship law journals at higher-ranked and lower-ranked law schools.

Part II of the article examines how the percentage of citations varies across elite law schools (represented by Harvard and Yale), the top 14 schools, and Tier I, Tier II, Tier III, and Tier IV schools. (Mr. Detweiler explains the selection of the exemplar schools in his methodology.) The data shows, unsurprisingly, a strong prestige factor in the law journals cited in cases. Harvard Law Review was the clear leader, with a significantly higher percentage of citations than the next highest, Yale Law Journal. Although the prestige factor is still apparent, the rate of opinions citing Harvard Law Review or Yale Law Journal has steadily declined, from about 34% in 1970 to approximately 14% in 2018. Similarly, the percentage of opinions citing journals of the top 14 law schools fell from 1970 to 2018. During the same period, the percentage of opinions citing Tier I law journals stayed relatively stable. The rates of opinions citing Tier II and Tier III journals varied more from year to year, but the trend has been a gradual increase. Opinions citing Tier IV flagship law journals have likewise seen a gradual increase over time, while remaining the smallest percentage. The elite advantage is still present but is not as great as it once was.

Why has the gap narrowed? Mr. Detweiler points to some of the same factors highlighted in the decline of the percentage of reported opinions citing academic law journals. One is the rise of computer-assisted legal research (CALR) and the ease with which researchers can search and retrieve articles from a vast array of academic law journals, not just the elite titles. A related point is the explosion in the number of academic law journals. Mr. Detweiler points out that 132 journals were indexed by the Current Index to Legal Periodicals in 1970, whereas today Lexis and Westlaw carry approximately 1,000 titles in their law journal databases. He hypothesizes that the growth in the number of journals is diluting the percentage of citing cases that any one journal captures.

While discussing judicial citation of academic legal journals, Mr. Detweiler contextualizes changes in citation patterns within changes in the legal academy and the court system. He explains in detail his well-reasoned methodology for each stage of his research, including documenting Lexis search strings longer than most of us have ever contemplated. His article is an interesting foray into academic legal scholarship and its influence, or lack thereof, on judicial opinions.

Author’s Note: Mr. Detweiler provides supplemental tables along with the article. Available tables are 1) citations to all law reviews; 2) citations to top 14 law reviews; 3) citations to Tier I and Tier II law reviews; and 4) citations to Tier III and Tier IV law reviews.

Cite as: Kristina Niedringhaus, From the Ivory Tower to the Judicial Trenches: Are We Bridging the Divide?, JOTWELL (April 6, 2021) (reviewing Brian T. Detweiler, May It Please the Court: A Longitudinal Study of Judicial Citation to Academic Legal Periodicals, 39 Legal Ref. Servs. Q. 87 (2020)), https://lex.jotwell.com/from-the-ivory-tower-to-the-judicial-trenches-are-we-bridging-the-divide/.

Is it a “Good” Case? Can You Rely on BCite, KeyCite, and Shepard’s to Tell You?

Every law student is told repeatedly to check that the cases they are relying on are still “good” law. They may even be told that not using a citator such as Shepard’s, KeyCite, or BCite could be malpractice, and multiple ethics cases support that claim. But how reliable are the results returned by these systems?

Paul Hellyer has published the surprising results of an important study investigating this question. Hellyer looked at 357 citing relationships that one or more of these three citators labeled as negative. “Out of these, all three citators agree that there was negative treatment only 53 times. This means that in 85% of these citing relationships, the three citators do not agree on whether there was negative treatment.” (P. 464.) Some of the disagreement between systems could be attributed to one system incorrectly marking a relationship as negative when it is not. That might be considered a less egregious mistake, on the presumption that the researcher would review the flagged case and find no negative treatment, although it is still a costly mistake in a field where time matters. However, Hellyer accounts for this false-positive (or negative, in this case) problem, and the results of his study remain distressing.
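The 85% disagreement figure follows directly from the counts reported (this quick check is mine):

\[
\frac{357 - 53}{357} = \frac{304}{357} \approx 0.85.
\]

In other words, the three systems reached consensus on negative treatment in only 53 of the 357 flagged relationships, roughly 15%.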

We are told that the citators are reliable. I, along with numerous law professors and judges, have told students and attorneys that failure to use a citator could lead to anything from a judicial tongue-lashing to disciplinary action to malpractice charges. As Hellyer points out (P. 450), the marketing for citators assures us that the systems produce reliable results. For example, KeyCite is marketed as “the industry’s most complete and accurate citator,” one that lets you be “confident you’re relying on valid law.” Similarly, the Shepard’s product page proclaims, “Is it good law? Shepardize® and be sure.” Bloomberg BNA is less boastful in its promotion of BCite, stating, “Easy to use indicators…allow you to immediately (emphasis added) see how other cases have treated your case.”

Let’s look at some more data from Hellyer’s study, which he believes is “the largest statistical comparison study of citator performance for case validation” and the first to include BCite. (P. 450.) Beyond noting how the citators labeled the relationships, Hellyer assesses the case opinions themselves to determine the nature of each citing relationship and whether the citator labeled it correctly. He differentiates between negative relationships that a citator failed to identify at all and those it misidentified, as when a case that was in fact overturned is labeled as something else, such as “distinguished by.” When Hellyer examined whether the citators agreed on the subset of negative treatment, all three systems agreed on only about 11% of references.

Hellyer’s article is an important read for anyone who relies on a citator for case validation, or determining whether a case is still “good” law. The results are fascinating, and his methodology is thorough and detailed. Before delving into his findings, Hellyer reviews previous studies and explains his process in detail. His dataset is available upon request. The article has additional value because Hellyer shared his results with the three vendors prior to publication and describes and responds to some of their criticisms in the article, allowing readers to make their own assessment of the critique.

Even more interesting than the broader statistics are Hellyer’s accounts of specific errors. He acknowledges that the omission errors, as opposed to the misidentification errors, involved unpublished cases and so might present less of a problem for attorneys. The misidentification errors are another matter: Hellyer concludes that all three citators exhibit their greatest weakness not in identifying citing cases but in the editorial analysis of what a citing relationship means. For example, Hellyer’s dataset included four cases later overruled by the United States Supreme Court. All three citators misidentified at least one citing relationship, and one of them misidentified three of the four cases as something other than overruled. Hellyer’s examination of these cases revealed how such misidentification errors can filter through to other citing relationships and create further errors. (Pp. 467-471.)

Analysis of citing relationships, and whether those relationships are positive or negative, is essential to the legal system, and reliance on “good” law, or case validation, is the critical first step. Hellyer states that the results of his analysis mean “that when you citate a case that has negative treatment, the results you get depend mainly on which citator you happen to be using.” (P. 465.) This is a stunning assessment of a vital resource that is so widely and heavily relied upon by the legal community.

Cite as: Kristina Niedringhaus, Is it a “Good” Case? Can You Rely on BCite, KeyCite, and Shepard’s to Tell You?, JOTWELL (April 22, 2019) (reviewing Paul Hellyer, Evaluating Shepard’s, KeyCite, and BCite for Case Validation Accuracy, 110 Law Libr. J. 449 (2018)), https://lex.jotwell.com/is-it-a-good-case-can-you-rely-on-bcite-keycite-and-shepards-to-tell-you/.

What Don’t You Know and How Will You Learn It?

Susan Nevelow Mart, The Algorithm as a Human Artifact: Implications for Legal [Re]Search, 109 Law Libr. J. 387 (2017).

For those of us who are not engineers or programmers, magical results appear when we run searches in legal databases. However, we have little understanding of the machinations behind the ever-present e-wall. What kind of confidence can we have when the underlying structure of legal databases is hardwired with human biases? We must ask ourselves the question posed to then-Senator Obama and Senator McCain at a town hall debate in 2008: “What don’t you know and how will you learn it?”

When I teach legal research, my students compare the same searches in different databases. One goal is to demonstrate that different databases return different results. But a more nuanced goal is to examine the results closely enough to gain insight into which databases might be more useful for updating, for case searching, for browsing statutes, and for other research tasks. Susan Nevelow Mart’s study will elevate these discussions because of her focus on human-engineered algorithms and the inherent biases in the databases used for legal research. The study will also push researchers to think more about search strategy and will help set more realistic expectations about search results.

Mart studied the impact of human judgment and bias at every step of the database search process. Her study explains how bias is hardwired into the human-engineered algorithm of each database. Additional layers of human judgment and bias enter through the choice of database, the date and time of the search, the search terms, the vendor’s classification scheme, and the fact that searchers typically browse only the first ten sometimes-relevant results. Mart introduces us to the concept of algorithmic accountability, “the term for disclosing prioritization, classification, association, and filtering.” She contends that algorithmic accountability, or understanding a bit more about the secret sauce in the inputs, will help researchers produce more accurate search results.

Mart’s research sought to test hypotheses about search algorithms by examining the results of the same searches in the same jurisdiction across six databases: Casetext, Fastcase, Google Scholar, Lexis Advance, Ravel, and Westlaw. Looking at the relevance of the top 10 results, it is unsurprising that Lexis Advance and Westlaw lead the relevancy rankings, because they have the longest standing in the market. It is surprising, however, that the top 10 results for those two vendors were relevant only 57% and 67% of the time, respectively.

Mart found that, on average, 40% of the cases in each database’s top 10 results were unique to that database. She also explores how many of those unique results are relevant. Again, it is unsurprising that Westlaw (at 33%) and Lexis Advance (at about 20%) lead on this measure. It is surprising, however, that so many relevant cases turn up as unique results when the same search is performed in each database. And because we don’t know what is in the secret sauce, it is difficult to improve these outcomes.
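Mart’s uniqueness metric is simple to state precisely, even though the ranking algorithms behind it are opaque. Below is a minimal sketch of the computation in Python; the database labels are vendors from the study, but the case names and top-3 lists are invented for illustration (the study examined top-10 lists across many searches). This is my illustration of the metric, not Mart’s code or data.

```python
# Sketch of the "unique cases" metric: for one search, what share of
# each database's top results did no other database return?

from typing import Dict, List

def unique_share(results: Dict[str, List[str]]) -> Dict[str, float]:
    """Fraction of each database's results not returned by any other database."""
    shares = {}
    for db, cases in results.items():
        # Pool every case returned by the *other* databases.
        others = {c for other_db, other_cases in results.items()
                  if other_db != db for c in other_cases}
        unique = [c for c in cases if c not in others]
        shares[db] = len(unique) / len(cases) if cases else 0.0
    return shares

# Hypothetical top results for a single search:
top_results = {
    "Westlaw":       ["Case A", "Case B", "Case C"],
    "Lexis Advance": ["Case A", "Case D", "Case E"],
    "Fastcase":      ["Case B", "Case F", "Case G"],
}

for db, share in unique_share(top_results).items():
    print(f"{db}: {share:.0%} of top results unique")
# Westlaw: 33% (only Case C is unique)
# Lexis Advance: 67% (Cases D and E)
# Fastcase: 67% (Cases F and G)
```

Averaging these per-search shares across many searches is what produces a figure like Mart’s roughly 40%; the divergence the metric captures comes from differences in the vendors’ content and proprietary ranking algorithms.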

There are a number of takeaways from Mart’s study. First, algorithmic variations lead to variations in the unique, and in the relevant, results returned by each database. Second, database vendors want us to have confidence in their products, but it is still necessary to run the same search in more than one database to improve the chances of retrieving the most comprehensive set of relevant results. Third, while some of the newer legal databases return fewer unique and fewer relevant results, they can bring advantages depending on the research topic, the time period, and other contextual details.

This well-researched and well-written article is required reading for every attorney who performs research on behalf of a client and for every professor who teaches legal research or uses legal databases. Because we often don’t know what we don’t know, Mart’s work pushes us to think more deeply about our search products and processes. Mart’s results provide an opportunity to narrow the gap in knowledge by learning a bit about what we don’t know. Learning from this scholarly yet accessible article brings the reader closer to understanding how to derive the optimal output even without knowing the ingredients in the secret sauce.

Cite as: Elizabeth Adelman, What Don’t You Know and How Will You Learn It?, JOTWELL (February 19, 2018) (reviewing Susan Nevelow Mart, The Algorithm as a Human Artifact: Implications for Legal [Re]Search, 109 Law Libr. J. 387 (2017)), https://lex.jotwell.com/dont-know-will-learn/.

A Call, and Roadmap, to Create Legal Research Classes that Meet the Experiential Standard

Alyson M. Drake, The Need for Experiential Legal Research Education, 108 Law Libr. J. 511 (2016).

Experiential learning is currently one of the buzzwords of legal education. Recent changes to the ABA Standards and Rules of Procedure for Approval of Law Schools have focused greater attention on learning outcomes and assessment and on increasing opportunities to learn and practice the skills students will use as attorneys. In fact, ABA Standard 303(a)(3) requires a minimum of six credit hours of experiential coursework.

Traditionally, experiential learning was widely thought to be the domain of law school clinics and externships, or field placements. However, the increased credit-hour requirement has caused law schools to review their curricula and determine whether sufficient experiential learning opportunities exist to meet the minimum. Accordingly, there is a push to design new courses, or redesign existing ones, to fit a third type of experiential learning: simulation courses, as described in ABA Standard 304. To qualify as a simulation course under the standard, a course must provide an experience “reasonably similar” to client representation, although the student is not working with a real client.

Professor Alyson M. Drake’s article calls for the creation or retooling of stand-alone research classes so that they meet the requirements for designation as experiential courses. An increase in the number of research classes categorized as experiential provides two benefits. First, and most importantly, it offers additional legal research instruction beyond the first year of law school. Second, it supports law schools’ efforts to expand course offerings that meet the experiential standard.

Classes in legal research have traditionally been sparse and undervalued in law schools, an approach that matches neither the proficiency practicing attorneys need nor the amount of time they devote to legal research. Professor Drake notes that newer attorneys spend approximately 35% of their time on legal research, and more experienced attorneys approximately 18%. Yet hiring attorneys often assess new attorneys’ research skills as needing improvement. Providing more legal research instruction, in a format that closely mimics the work of a practicing attorney, will produce students who are better prepared to transition to practice.

Professor Drake’s article provides a very useful overview of the history of this shift in the ABA Standards and in legal education generally. It also provides a breakdown of the Standards and how they can be interpreted with regard to experiential education and, in particular, simulation courses. This background and analysis is quite useful both for those seeking to understand the context of the shift toward experiential learning and for those seeking to create experiential learning opportunities in legal research courses.

Finally, Professor Drake discusses several current legal research teaching methods and how they might be retooled to satisfy the requirements of Standard 304. She breaks down the components of different pedagogical structures, highlighting which components likely already meet the requirements and which might not, and recommending ways to restructure the latter to meet the standard.

Professor Drake begins with the “traditional legal research course,” described as “those where the lecture takes place during class time.” (P. 529.) Some classroom practice is usually included, but the balance between lecture and skills practice can vary significantly. A traditional class of this sort may be the most challenging to convert to an experiential model. However, Professor Drake offers several changes that could help such a class meet the ABA requirements for an experiential course. These include weighting the balance between lecture and practice heavily toward practice, using assignments that closely approximate problems likely to arise in practice, and limiting class size so that the professor can provide “direct supervision.”

Professor Drake goes on to discuss key components of flipped, online, and specialized legal research courses and outlines several suggestions for how these courses can be retooled to meet the experiential requirements. Her article is a call to create experiential legal research courses, but it goes further by providing a roadmap for designing new classes or restructuring existing ones. Legal research is a critical lawyering skill; it is also an area where hiring attorneys believe new attorneys’ skills are lacking. This gap between the efficient, effective research that practice demands and students’ ability at graduation creates an opportunity for the growth of legal research courses in the curriculum.

Cite as: Kristina Niedringhaus, A Call, and Roadmap, to Create Legal Research Classes that Meet the Experiential Standard, JOTWELL (May 2, 2017) (reviewing Alyson M. Drake, The Need for Experiential Legal Research Education, 108 Law Libr. J. 511 (2016)), https://lex.jotwell.com/a-call-and-roadmap-to-create-legal-research-classes-that-meet-the-experiential-standard/.

Toward a Universal Understanding of the Value of Legal Research Education

Learning the substantive law has always been the foundation of a legal education. As job prospects for attorneys tightened, a focus on practitioner skills began trending in legal education. There is an expectation that law schools will produce practice-ready attorneys. Despite this expectation, why are Johnny and Jane unable to research?

Professor Caroline L. Osborne’s research confirms what many legal educators surmise about the state of legal research education: it is undervalued in law schools. “For those involved in legal education, the goal is to provide students with the tools they need to succeed . . . . ” (P. 407.) In a carpenter’s arena, the value of the hammer is universally understood. The value of legal research as an essential tool of the legal trade, on the other hand, is not well understood in legal education. This lack of understanding persists despite the MacCrate Report and its ilk, the codified ethical obligations of attorneys, and promulgated research competency standards. With this in mind, Professor Osborne presents each factor contributing to the devaluation of legal research education so that the reader is equipped to ponder solutions.

The MacCrate Report, published in 1992, served as one of the first comprehensive assessments of the legal profession and the skills attorneys need. It declares that legal research skills are fundamental: “It can hardly be doubted that the ability to do legal research is one of the skills that any competent legal practitioner must possess.” (P. 163.) Similarly, Best Practices for Legal Education acknowledges legal research as “essential.” (P. 58.)

Attorneys’ ethical obligations place further demands on Johnny and Jane. Model Rule of Professional Conduct 1.1, which has been adopted in whole or in part by all fifty states, holds the lawyer to a certain level of competence:

A lawyer shall provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.

Comment 2 to Model Rule 1.1 points out that competent representation requires a lawyer to possess “important legal skills” common to all legal problems. Comment 8 further articulates the need to maintain competence, in part through technology competency, including the technology skills essential for performing research:

To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, . . .

Technology is a major factor in the devaluation of legal research education. It has enabled the creation of, and easy access to, massive amounts of information, and it has transformed how we perform research. While the ubiquity of information has made research appear easier, research has actually become a more sophisticated task. To further complicate the picture, Osborne points out that today’s students seem to lack the ability to evaluate information and to think critically.

The ethical standards outlined above are bolstered by Legal Research Competencies and Standards for Law Student Information Literacy, developed and approved by the American Association of Law Libraries in 2012. These standards were promulgated to inform best practices in legal education curricular design and to provide a baseline competency for attorneys in practice settings.

Notwithstanding these industry standards, Professor Osborne discusses the factors contributing to the devaluation of legal research education in law school curricula. Her survey results show, for example, that only 16% of responding institutions have a stand-alone research class, as opposed to research instruction integrated into the legal writing curriculum. She also points out that law school writing programs have been strengthened while the emphasis on research has lessened. Combined, these factors have had an unfortunate impact on legal research education. “[T]he cost of graduating fluent writers should not be the legal research curriculum.” (P. 404.)

Despite what the industry demands, Osborne opines that the current state of legal education (pass/fail grading, fewer credits than other 1L courses, and a diminished presence through integration into the writing curriculum) sends law students the message that legal research education is unimportant. Her thesis is supported by the findings of a recent BARBRI State of the Legal Field survey: “Faculty placed very little importance on research, with just 4 percent citing it as the most important skill for recent law school graduates . . . .” In our role as legal educators, “we fail to signal the importance of legal research in the practice of law.” (P. 409.)

In this article, Professor Osborne conveys a very important message to legal educators: we should consider ourselves on notice that our actions and our curricula signal to students a diminished importance of legal research education. As a result, Johnny and Jane are leaving law school without this fundamental tool, despite the expectation that they graduate with both the substantive knowledge and the skills to be practice-ready and to meet or exceed industry standards.

Cite as: Elizabeth Adelman, Toward a Universal Understanding of the Value of Legal Research Education, JOTWELL (February 7, 2017) (reviewing Caroline L. Osborne, The State of Legal Research Education: A Survey of First-Year Legal Research Programs, or Why Johnny and Jane Cannot Research, 108 Law Libr. J. 403 (2016)), https://lex.jotwell.com/toward-a-universal-understanding-of-the-value-of-legal-research-education/.