The Journal of Things We Like (Lots)

Tag Archives: Librarianship and Legal Technology

Access to Justice Requires Usability, Not Just Open Access

Any law librarian who works with the public or teaches no- or low-cost legal research, or any attorney or law student using free resources to conduct research, understands the wide gap in usability between fee-based databases and most free, open-access legal resources. Focusing on statutory code research, Professor Darvil’s article, Increasing Access to Justice by Improving Usability of Statutory Code Websites, examines the need not just to provide access to statutory codes but to present the information in a way that allows users to find the law they need. Through the lens of website usability standards, Professor Darvil assesses state code websites and recommends ways those websites can improve usability. Many states have created “Access to Justice” initiatives and commissions aiming to improve citizens’ access to and experience with the legal system. Professor Darvil’s recommendations provide excellent guidance for those interested in improving the research experience and access to the law for everyone, including those without access to fee-based databases such as Lexis or Westlaw.

Inevitably, my legal research students are, at some point in the semester, treated to my soapbox speech about how equitable access to the legal system rests on the ability of any citizen, regardless of means, to access the law they are obligated to live under. If states care about access to justice, logically they must care about how they provide access to the law. Professor Darvil’s article offers an excellent discussion of the access to justice problems endemic to a legal system in which, particularly on the civil side, many litigants are self-represented, and of how those litigants are affected, frequently negatively, by their inability to find the law.

The unique value of Professor Darvil’s article, however, is her assessment of the usability of state statutory websites and her recommendations for improvement. Her evaluation of the state websites uses a standards-based approach. Nearly half the states and the District of Columbia have adopted the Uniform Electronic Legal Materials Act (UELMA), which seems a reasonable place to look for applicable standards. Unfortunately, as Professor Darvil notes, UELMA does not address the usability of electronic legal materials. Lacking usability standards in UELMA, Professor Darvil turns instead to standards developed in the disciplines of human-computer interaction (HCI) and user experience (UX). Experts in these fields explore how easily people can navigate a particular website interface and find what they are looking for. There are several sets of standards for assessing website usability, and Professor Darvil uses the following guidelines from the U.S. Department of Health and Human Services (HHS):

Search Guidelines:

  • Ensuring usable search results
  • Designing search engines to search entire site or clearly communicate what part of the site is searched
  • Making upper and lowercase search terms equivalent
  • Designing search around user’s terms

Navigation Guidelines:

  • Providing navigational options
  • Differentiating and grouping navigation elements
  • Offering a clickable list of contents
  • Providing feedback on user’s location

(P. 133.)

Professor Darvil evaluates the statutory code websites of all 50 states and the District of Columbia by exploring the features each offers, running some relatively simple searches for state laws, and comparing the results to a 50-state survey. She then explains the results using multiple illustrative screenshots from a variety of state websites, as well as appendices outlining the results for each guideline above in every jurisdiction.
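To make the structure of this kind of evaluation concrete, here is a minimal sketch of how per-jurisdiction, per-guideline findings like those in the appendices might be tabulated. It is illustrative only: the jurisdiction names, results, and function are hypothetical and not drawn from the article, though the guideline keys track the eight HHS guidelines listed above.

    # Hypothetical tabulation of usability findings, one record per jurisdiction.
    # The guideline keys track the eight HHS guidelines listed above.
    GUIDELINES = [
        "usable_search_results",
        "search_scope_clear",
        "case_insensitive_search",
        "designed_around_user_terms",
        "navigation_options",
        "grouped_navigation",
        "clickable_contents",
        "location_feedback",
    ]

    # Placeholder results: True means the state code website met the guideline.
    findings = {
        "State A": {g: True for g in GUIDELINES},
        "State B": {**{g: True for g in GUIDELINES}, "search_scope_clear": False},
    }

    def compliance_rate(findings, guideline):
        """Share of evaluated jurisdictions meeting a single guideline."""
        met = sum(1 for results in findings.values() if results[guideline])
        return met / len(findings)

    for g in GUIDELINES:
        print(f"{g}: {compliance_rate(findings, g):.0%}")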

Finally, Professor Darvil ends with recommendations for states seeking to improve their statutory code websites. These recommendations range from fairly basic to more complex. Suggestions for improving navigation include providing a table of contents, clickable search trails, and navigational buttons. Suggestions for improving searching are more complex and include using a controlled vocabulary, providing context around the search terms by indicating where the researcher is in the code, offering relevancy rankings, and supplying instructions on how best to search. Professor Darvil points out that many of these recommendations are familiar to librarians and suggests that states enlist librarians to help implement them.

This article provides a logical and thorough assessment of state statutory code websites, explains why usability matters for access to justice, and suggests ways that governments can improve usability. As she concludes, “Governments that are based on the rule of law have special duties to their citizens: transparency, accountability, and reasonable access to their laws. […] When state governments do so, they promote access to justice and the rule of law.” (P. 153.)

Cite as: Kristina Niedringhaus, Access to Justice Requires Usability, Not Just Open Access, JOTWELL (August 11, 2023) (reviewing Kathleen Darvil, Increasing Access to Justice by Improving Usability of Statutory Code Websites, 115 Law Lib. J. 123 (2023)), https://lex.jotwell.com/access-to-justice-requires-usability-not-just-open-access/.

Checking Annotations in both USCS and USCA: Necessary or Redundant?

Law students and attorneys often wonder if it matters whether they use United States Code Service (USCS), a Matthew Bender publication also available on Lexis+, or United States Code Annotated (USCA), a Thomson Reuters publication also available on Westlaw Edge. In 1L legal research classes, I often field questions about the differences between the two publications. “They are both the US Code, right?” is a common refrain. The traditional lore, passed on to law students, was that USCA strove to provide an annotation for every relevant case while USCS strove to provide annotations to the “best” cases. Accordingly, USCA was said to contain a greater number of annotations and USCS was said to be more selective. I recall being taught this in law school. However, like much folklore, the foundations for this assertion are becoming lost with time, and it is unclear whether it represents the current state of the two annotated codes. The product page for the print edition of USCA states that the set has “comprehensive case annotations.” Similarly, the product page for the print version of USCS states that it is “the most comprehensive” set. We are left to determine for ourselves the meaning of “comprehensive.” We will return to this later, but it is important to note that USCS case annotations include administrative decisions while USCA case annotations do not.

Ms. Marcum’s research explores whether there is a significant difference between the annotations found in USCA and USCS. Does it matter which annotated code the researcher uses? Should a thorough researcher use both? Most people would expect some unique case annotations in each annotated code with a fair amount of overlap between the two sets. The surprising result was that, of 9,164 case annotations for 23 statutes, 6,748 (73.6%) were unique, appearing in only one of the two annotated codes. Most researchers will be shocked by the small amount of overlap between the two publications. One might anticipate that this disparity is statistically significant, and Ms. Marcum confirms that it is using a Wilcoxon T test.

Going deeper into the numbers, of the 6,748 unique case annotations, 3,852 were unique to USCA and 2,896 were unique to USCS. Of the case annotations in USCA, 76% were unique, while 70.5% of the case annotations in USCS were unique. Back to those administrative decisions that are included in USCS but not in USCA: they are included in the data. Ms. Marcum explains her research methodology in detail and included the administrative decisions “because they are publisher-neutral, government information that both codes could have included if they so desired.” (P. 210.)
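For readers who want to see the arithmetic, the sketch below reproduces the headline uniqueness percentage from the totals above and illustrates the kind of paired comparison a Wilcoxon T (signed-rank) test performs. Only the totals come from the article; the per-statute counts are invented placeholders (Ms. Marcum’s study covered 23 statutes), and the scipy call simply shows how such a test is run.

    from scipy.stats import wilcoxon

    # Totals reported in the article.
    total_annotations = 9164
    unique_to_usca = 3852
    unique_to_uscs = 2896
    unique_total = unique_to_usca + unique_to_uscs          # 6,748

    print(f"Unique share: {unique_total / total_annotations:.1%}")  # ~73.6%

    # A Wilcoxon T (signed-rank) test compares paired, per-statute annotation
    # counts. These pairs are hypothetical placeholders, not the study's data.
    usca_counts = [410, 120, 95, 300, 88, 150, 60, 220, 45, 75]
    uscs_counts = [310, 100, 70, 260, 95, 110, 40, 180, 50, 55]

    statistic, p_value = wilcoxon(usca_counts, uscs_counts)
    print(f"Wilcoxon T = {statistic}, p = {p_value:.3f}")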

Why does this matter? It is an additional data point available to help a researcher decide whether to use USCA, USCS, or both. It also adds to the information available to information professionals deciding whether to purchase one, or both, of the annotated codes. Neither the print sets nor their related electronic research systems are inexpensive. There is a strikingly limited amount of empirical research, either quantitative or qualitative, studying legal research tools. Ms. Marcum’s research is an important addition to the knowledge we have about the tools lawyers, law students, and law librarians use every day. For example, there are only two other comparisons of case annotations available. A Comparison of Case Law Results between Bloomberg Law’s ‘Smart Code’ Automated Annotated Statutes and Traditional Curated Annotated Codes is an unpublished draft paper by Jason Zarin from 2017, available at SSRN (Social Science Research Network), https://ssrn.com/abstract=2998805 or http://dx.doi.org/10.2139/ssrn.2998805. The other is four decades old: Jeanne Benioff, A Comparison of Annotated U.S. Codes, 2 Legal Reference Services Q. 37 (1982). In fact, very few comparisons of any aspect of major legal research products exist. Some notable exceptions are works by Susan Nevelow Mart, such as The Algorithm as a Human Artifact: Implications for Legal [Re]Search; The Case for Curation: The Relevance of Digest and Citator Results in Westlaw and Lexis; and The Relevance of Results Generated by Human Indexing and Computer Algorithms: A Study of West’s Headnotes and Key Numbers and Lexis’s Headnotes and Topics (102 Law Libr. J. 221 (2010)). Also of note is research by Paul Hellyer, Evaluating Shepard’s, KeyCite, and BCite for Case Validation Accuracy, which I reviewed on Jotwell. Given the cost of major legal research databases, more evaluative comparisons of their features and tools would benefit the legal profession.

Research like Ms. Marcum’s supports evidence-based decisions by researchers and information professionals about which resources to purchase and use. It is imperative that more scholars undertake empirical research analyzing and comparing the legal research tools relied upon by the legal profession.

Cite as: Kristina Niedringhaus, Checking Annotations in both USCS and USCA: Necessary or Redundant?, JOTWELL (June 3, 2022) (reviewing Emily Marcum, Comparing the United States Code Annotated and the United States Code Service Using Inferential Statistics: Are Their Annotations Equal? 113 Law Lib. J. 207 (2021)), https://lex.jotwell.com/checking-annotations-in-both-uscs-and-usca-necessary-or-redundant/.

From the Ivory Tower to the Judicial Trenches: Are We Bridging the Divide?

Most in legal academia would consider citation of their law review article in a judicial opinion an honor. However, most probably also remember Chief Justice Roberts’ 2011 comment that an article about “the influence of Immanuel Kant on evidentiary approaches in Eighteenth Century Bulgaria or something…isn’t of much help to the bar.” The Chief Justice’s comment may leave you wondering how often judicial opinions have cited law review articles and what factors might make your article into a rare unicorn. Mr. Detweiler answers these questions and more in May It Please the Court: A Longitudinal Study of Judicial Citation to Academic Legal Periodicals.

Mr. Detweiler has compiled a list of state and federal court citations to legal academic journals from 1945-2018 and mapped them as a proportion of all reported opinions and by total number annually. He tracks the ebb and flow of citations through the years and makes interesting observations about what may influence increases and decreases in citation frequency. But he doesn’t stop there. His research then compares citation frequency from 1970-2018 of articles in Harvard Law Review and Yale Law Journal with flagship journals from sample schools in each tier of the U.S. News rankings. The article also includes a scan of the history of academic law journals, the first citations of journals, and the explosive growth of journals starting in the 1970s.

The article begins with a brief history of student-edited law reviews and their relatively slow acceptance by the judiciary. Mr. Detweiler notes Chief Justice Taft’s complaint about his colleagues’ “‘undignified’ use of law review material in their dissents.” But change was already underway. The next Chief Justice, Chief Justice Hughes, labeled law reviews the “fourth estate of the law.” Mr. Detweiler then moves on to examine all citations of academic law journals from 1945-2018 in reported state and federal cases. Graphs included in the article illustrate changes over time. The percentage of cases citing law reviews rises from 1.8% in 1945 to almost 5% in the mid-1960s/1970s, with a mid-decade dip of about 0.5%. Mr. Detweiler notes that the peak of 4.9% represents a 172% increase in citing cases over the 1945 rate. After the peak in the mid-1970s, the percentage of opinions citing articles declines over the next two decades. Since the mid-1990s, the percentage has leveled out somewhat, fluctuating between 1.5% and nearly 2%, and reaching 1.8% in 2018. A similar graph models the growth in the absolute number of opinions citing law review articles, with a similar increase and then decline. Mr. Detweiler attributes a portion of the percentage decrease in the early 1980s to the number of reported opinions increasing more quickly than the number of citing cases.
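(A quick back-of-the-envelope check of that figure, not taken from the article: (4.9 − 1.8) / 1.8 ≈ 1.72, or roughly a 172% increase over the 1945 rate.)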

Mr. Detweiler posits several possible causes for the decrease in the percentage of cases citing law reviews from its heyday in the mid-1960s/1970s to its current level. Two of the most compelling are technological advances and changes in the content of academic legal scholarship. Both Lexis and Westlaw launched in the mid-1970s, providing easier access to case law, which was also growing in breadth. Academic law reviews were incorporated into the legal research systems more slowly and did not have expansive coverage until the mid-1990s. Judges and their clerks could easily access case law (especially binding precedent) directly instead of relying on scholarly works.

Mr. Detweiler also highlights a shift, beginning in the 1970s, at higher-ranked law schools away from more traditionally doctrinal scholarship toward interdisciplinary work and new areas of scholarship that were not as directly applicable to the everyday work of attorneys and judges. This point becomes important when we view differences in citation rates between flagship law journals at higher-ranked and lower-ranked law schools.

Part II of the article examines how the percentage of citations varies among elite law schools (represented by Harvard and Yale), the top 14 schools, and Tier I, Tier II, Tier III, and Tier IV schools. (Mr. Detweiler explains the selection of the exemplar schools in the methodology.) The data shows, unsurprisingly, a strong prestige factor in the law journals cited in cases. Harvard Law Review was the clear leader, with a significantly higher percentage of citations than the next highest, Yale Law Journal. Although the prestige factor is still apparent, the rate of opinions citing Harvard Law Review or Yale Law Journal has steadily declined, from about 34% in 1970 to approximately 14% in 2018. Similarly, the percentage of opinions citing top 14 law school journals fell from 1970 to 2018. During the same period, the percentage of opinions citing Tier I law journals stayed relatively stable. The rates of opinions citing Tier II and Tier III journals had more extreme variations from year to year, but the trend has been a gradual increase. Similarly, opinions citing Tier IV flagship law journals have seen a gradual increase over time while still representing the smallest percentage. The elite advantage is still present but is not as great as it once was.

Why has the gap narrowed? Mr. Detweiler points to some of the same factors highlighted in the decline of the percentage of reported opinions citing academic law journals. One of these is the rise of computer-assisted legal research (CALR) and the ease with which researchers can search and retrieve articles from the full range of academic law journals, not just the elite titles. A related point is the explosion in the number of academic law journals. Mr. Detweiler points out that 132 journals were indexed by the Current Index to Legal Periodicals in 1970, but today Lexis and Westlaw carry approximately 1,000 titles in their law journal databases. He hypothesizes that the increase in the number of journals dilutes the percentage of citing cases that any one journal captures.

While discussing judicial citation of academic legal journals, Mr. Detweiler contextualizes changes in citation patterns within changes in the legal academy and the court system. He explains in detail his well-reasoned methodology for each stage of his research, including documenting Lexis search strings longer than most of us have ever contemplated. His article is an interesting foray into academic legal scholarship and its influence, or lack of influence, over judicial precedent.

Author’s Note: Mr. Detweiler provides supplemental tables along with the article. Available tables are 1) Citations to all law reviews; 2) Citations to top 14 law reviews; 3) Citations to Tier I and Tier II law reviews; and 4) Citations to Tier III and Tier IV law reviews.

Cite as: Kristina Niedringhaus, From the Ivory Tower to the Judicial Trenches: Are We Bridging the Divide?, JOTWELL (April 6, 2021) (reviewing Brian T. Detweiler, May It Please the Court: A Longitudinal Study of Judicial Citation to Academic Legal Periodicals, 39 Legal Ref. Servs. Q. 87 (2020)), https://lex.jotwell.com/from-the-ivory-tower-to-the-judicial-trenches-are-we-bridging-the-divide/.

Is it a “Good” Case? Can You Rely on BCite, KeyCite, and Shepard’s to Tell You?

Every law student is told repeatedly to check that the cases they are relying on are still “good” law. They may even be told that not using a citator such as Shepard’s, KeyCite, or BCite could be malpractice, and multiple ethics cases would support that claim. But how reliable are the results returned by these systems?

Paul Hellyer has published the surprising results of an important study investigating this question. Hellyer looked at 357 citing relationships that one or more of these three citators labeled as negative. “Out of these, all three citators agree that there was negative treatment only 53 times. This means that in 85% of these citing relationships, the three citators do not agree on whether there was negative treatment.” (P. 464.) Some of the divergence between systems could be attributed to one system incorrectly marking a relationship as negative when it is not. That might be considered a less egregious mistake, if one presumes that the researcher would review the flagged case and find no negative treatment, although it is still a costly mistake in a field where time matters. However, Hellyer accounts for the false positive (or negative, in this case) problem, and the results of his study remain distressing.
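(The underlying arithmetic, for readers following along: 53 / 357 ≈ 14.8%, so the three citators disagree in roughly 85% of the citing relationships Hellyer examined.)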

We are told that the citators are reliable. I, along with numerous law professors and judges, have told students and attorneys that failure to use a citator could lead to anything from a judicial tongue-lashing to disciplinary action to malpractice charges. As Hellyer points out (P. 450), the marketing for citators assures us that the systems produce reliable results. For example, KeyCite is marketed as “the industry’s most complete and accurate citator,” promising that you can be “confident you’re relying on valid law.” Similarly, the Shepard’s product page proclaims, “Is it good law? Shepardize® and be sure.” Bloomberg BNA is less boastful in its promotion of BCite, stating, “Easy to use indicators…allow you to immediately (emphasis added) see how other cases have treated your case.”

Let’s look at some more data from Hellyer’s study, which he believes is “the largest statistical comparison study of citator performance for case validation” and the first to include BCite. (P. 450.) Beyond simply looking at how the citators labeled the relationships, Hellyer assesses the case opinions to determine the nature of each citing relationship and whether the citator labeled it correctly. He differentiates between negative relationships that a citator failed to identify in any way and those it misidentified. An example of the latter would be a case that was in fact overturned but that the citator labeled as something else, such as “distinguished by.” When Hellyer examined whether the citators agreed on the subset of negative treatment, all three systems agreed on only about 11% of references.

Hellyer’s article is an important read for anyone who relies on a citator for case validation, or determining whether a case is still “good” law. The results are fascinating, and his methodology is thorough and detailed. Before delving into his findings, Hellyer reviews previous studies and explains his process in detail. His dataset is available upon request. The article has additional value because Hellyer shared his results with the three vendors prior to publication and describes and responds to some of their criticisms in the article, allowing readers to make their own assessment of the critiques.

Even more interesting than the broader statistics are Hellyer’s details of specific errors. He acknowledges that the omission errors, as opposed to misidentification errors, involved unpublished cases that might present less of a problem for attorneys. However, Hellyer goes on to examine the misidentification errors and concludes that all three citators exhibit the greatest problems not in identifying the citing cases but in the editorial analysis of what the citing relationship means. For example, in Hellyer’s dataset there were four cases later overruled by the United States Supreme Court. All three citators misidentified at least one citing relationship, and one of them misidentified three of the four cases as something other than being overruled. Hellyer’s examination of these cases reveals how such misidentification errors can filter through to other citing relationships and create further errors. (Pp. 467-471.)

Analysis of citing relationships, and whether those relationships are positive or negative, is essential to the legal system, and reliance on “good” law, or case validation, is the critical first step. Hellyer states that the results of his analysis mean “that when you citate a case that has negative treatment, the results you get depend mainly on which citator you happen to be using.” (P. 465.) This is a stunning assessment of a vital resource that is so widely and heavily relied upon by the legal community.

Cite as: Kristina Niedringhaus, Is it a “Good” Case? Can You Rely on BCite, KeyCite, and Shepard’s to Tell You?, JOTWELL (April 22, 2019) (reviewing Paul Hellyer, Evaluating Shepard’s, KeyCite, and BCite for Case Validation Accuracy, 110 Law Libr. J. 449 (2018)), https://lex.jotwell.com/is-it-a-good-case-can-you-rely-on-bcite-keycite-and-shepards-to-tell-you/.

What Don’t You Know and How Will You Learn It?

Susan Nevelow Mart, The Algorithm as a Human Artifact: Implications for Legal [Re]Search, 109 Law Libr. J. 387 (2017).

For those of us who are not engineers or programmers, magical results appear when we run searches in legal databases. However, we have little understanding of the machinations behind the ever-present e-wall. What kind of confidence can we have when the underlying structure of legal databases is hardwired with human biases? We must ask ourselves the question posed to then-Senator Obama and Senator McCain at a town hall debate in 2008: “What don’t you know and how will you learn it?”

When I teach legal research, my students compare the same searches in different databases. One goal is to demonstrate that the results differ. But a more nuanced goal is to examine the results closely enough to gain insight into which databases might be more useful for updating, for case searching, for browsing statutes, and for other research tasks. Susan Nevelow Mart’s study will elevate these discussions because of her focus on human-engineered algorithms and the inherent biases in the databases used for legal research. The study will also guide researchers to think more about search strategy and will help set more realistic expectations about search results.

Mart studied the impact of human judgment and bias at every step of the database search process. Her study explains how bias is hardwired into the human-engineered algorithm of each database. Additional layers of human judgment and bias enter through the choice of database, the date and time of the search, the search terms, the vendor’s classification scheme, and the fact that searchers typically browse only the first ten sometimes-relevant results. Mart introduces us to the concept of algorithmic accountability, or “the term for disclosing prioritization, classification, association, and filtering.” Mart contends that algorithmic accountability, or understanding a bit more about the secret sauce in the inputs, will help researchers produce more accurate search results.

Mart’s research sought to test hypotheses about search algorithms by examining the results of the same searches in the same jurisdiction across six databases: Casetext, Fastcase, Google Scholar, Lexis Advance, Ravel, and Westlaw. Looking at the relevance of the top 10 results, it is unsurprising that Lexis Advance and Westlaw lead the relevancy rankings, given their long standing in the market. However, it is surprising that the top 10 results from those two vendors were relevant only 57% and 67% of the time, respectively.

Mart found that each of the six databases averages 40% unique cases in its top 10 results. Mart also explores how many of the unique results in each database are relevant. Again, it is unsurprising that Westlaw (at 33%) and Lexis Advance (at about 20%) lead in these two categories. It is surprising, however, that there are so many relevant cases appearing as unique results when the same search was performed in each database. And because we don’t know what is in the secret sauce, it is difficult to improve these outcomes.
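To make the idea of “unique cases in the top 10” concrete, here is a minimal sketch of how uniqueness could be computed from each database’s top ten case list. The result lists and the helper function are hypothetical; only the database names correspond to those in Mart’s study.

    # Hypothetical top-10 result lists keyed by database; values are case IDs.
    top_ten = {
        "Westlaw":       ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J"],
        "Lexis Advance": ["A", "B", "K", "L", "E", "M", "N", "H", "O", "P"],
        "Fastcase":      ["A", "Q", "R", "D", "S", "T", "U", "V", "I", "W"],
    }

    def unique_share(db, results_by_db):
        """Fraction of one database's top-10 cases found in no other top-10 list."""
        others = set()
        for name, cases in results_by_db.items():
            if name != db:
                others.update(cases)
        mine = results_by_db[db]
        return sum(1 for case in mine if case not in others) / len(mine)

    for db in top_ten:
        print(f"{db}: {unique_share(db, top_ten):.0%} unique in top 10")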

There are a number of takeaways from Mart’s study. First, algorithmic variations lead to variations in the unique, and in the relevant, results returned by each database. Second, database vendors want us to have confidence in their products, but it is still necessary to run the same search in more than one database to improve the chances of yielding the most comprehensive, relevant results. Third, while some of the newer legal databases yield fewer unique and fewer relevant results, they can bring advantages depending on the research topic, the time period, and other contextual details.

This well-researched and well-written article is required reading for every attorney who performs research on behalf of a client and for every professor who teaches legal research or uses legal databases. Because we often don’t know what we don’t know, Mart’s work pushes us to think more deeply about our search products and processes. Mart’s results provide an opportunity to narrow the gap in knowledge by learning a bit about what we don’t know. Learning from this scholarly yet accessible article brings the reader closer to understanding how to derive the optimal output even without knowing the ingredients in the secret sauce.

Cite as: Elizabeth Adelman, What Don’t You Know and How Will You Learn It?, JOTWELL (February 19, 2018) (reviewing Susan Nevelow Mart, The Algorithm as a Human Artifact: Implications for Legal [Re]Search, 109 Law Libr. J. 387 (2017)), https://lex.jotwell.com/dont-know-will-learn/.