
Law students and attorneys often wonder whether it matters if they use United States Code Service (USCS), a Matthew Bender publication also available on Lexis+, or United States Code Annotated (USCA), a Thomson Reuters publication also available on Westlaw Edge. In 1L legal research classes, I often field questions about the differences between the two publications. “They are both the US Code, right?” is a common refrain. The traditional lore, passed on to law students, was that USCA strove to provide an annotation for every relevant case while USCS strove to provide annotations for the “best” cases. Accordingly, USCA was said to contain a greater number of annotations, while USCS was said to be more selective. I recall being taught this in law school. However, like much folklore, the foundations of this assertion have become lost with time, and it is unclear whether it reflects the current state of the two annotated codes. The product page for the print edition of USCA states that the set has “comprehensive case annotations.” Similarly, the product page for the print version of USCS states that it is “the most comprehensive” set. We are left to determine for ourselves the meaning of “comprehensive.” We will return to this later, but it is important to note that USCS case annotations include administrative decisions while USCA case annotations do not.

Ms. Marcum’s research explores whether there is a significant difference between the annotations found in USCA and USCS. Does it matter which annotated code the researcher uses? Should a thorough researcher use both? Most people would expect some unique case annotations in each annotated code, with a fair amount of overlap between the two sets. The surprising result was that, out of 9,164 case annotations for 23 statutes, 6,748 were unique to either USCS or USCA. In other words, 73.6% of the case annotations were listed in only one of the annotated codes. Most researchers will be shocked by how little overlap there is between the two publications. One could anticipate that this disparity would prove statistically significant, and Ms. Marcum confirms that it does using a Wilcoxon T test.

Going deeper into the numbers, of the 6,748 unique case annotations, 3,852 were unique to USCA and 2,896 were unique to USCS. Put differently, 76% of the case annotations in USCA were unique, as were 70.5% of the case annotations in USCS. Back to those administrative decisions that are included in USCS but not in USCA: they are part of the data. Ms. Marcum explains her research methodology in detail and includes the administrative decisions “because they are publisher-neutral, government information that both codes could have included if they so desired.” (P. 210.)

Why does this matter? It is an additional data point available to help a researcher decide whether to use USCA, USCS, or both. It also adds to the information available to information professionals deciding whether to purchase one, or both, of the annotated codes. Neither the print sets nor their related electronic research systems are inexpensive. There is a strikingly limited amount of empirical research, either quantitative or qualitative, studying legal research tools. Ms. Marcum’s research is an important addition to the knowledge we have about the tools lawyers, law students, and law librarians use every day. For example, there are only two other comparisons of case annotations available. One, A Comparison of Case Law Results between Bloomberg Law’s ‘Smart Code’ Automated Annotated Statutes and Traditional Curated Annotated Codes, is an unpublished 2017 draft paper by Jason Zarin, available at SSRN (Social Science Research Network), https://ssrn.com/abstract=2998805 or http://dx.doi.org/10.2139/ssrn.2998805. The other is four decades old: Jeanne Benioff, A Comparison of Annotated U.S. Codes, 2 Legal Reference Services Q. 37 (1982). In fact, very few comparisons of any aspect of major legal research products exist. Some notable exceptions are works by Susan Nevelow Mart, such as The Algorithm as a Human Artifact: Implications for Legal [Re]Search, The Case for Curation: The Relevance of Digest and Citator Results in Westlaw and Lexis, and The Relevance of Results Generated by Human Indexing and Computer Algorithms: A Study of West’s Headnotes and Key Numbers and Lexis’s Headnotes and Topics (102 Law Libr. J. 221 (2010)). Also of note is research by Paul Hellyer, Evaluating Shepard’s, KeyCite, and BCite for Case Validation Accuracy, which I reviewed on Jotwell. Given the cost of major legal research databases, more evaluative comparisons of their features and tools would benefit the legal profession.

Research like Ms. Marcum’s supports evidence-based decision making by researchers and information professionals choosing which resources to purchase and use. It is imperative that more scholars undertake empirical research analyzing and comparing the legal research tools relied upon by the legal profession.

Cite as: Kristina Niedringhaus, Checking Annotations in both USCS and USCA: Necessary or Redundant?, JOTWELL (June 3, 2022) (reviewing Emily Marcum, Comparing the United States Code Annotated and the United States Code Service Using Inferential Statistics: Are Their Annotations Equal?, 113 Law Libr. J. 207 (2021)), https://lex.jotwell.com/checking-annotations-in-both-uscs-and-usca-necessary-or-redundant/.