Tag Archives: Librarianship and Legal Technology
Apr 7, 2026 | Kristina Niedringhaus | Librarianship and Legal Technology
Depending on who you ask, AI will either free us from the drudgery of our everyday lives, take our jobs, or wipe out humanity. It’s nearly impossible to glance at legal news without reading something about AI. There is, however, far more theorizing than actual data-driven research on how AI is working for (or against) the legal profession. Professor Lee Peoples helps fill that gap, reporting the results of his important study evaluating and comparing the performance of various specialized and non-specialized large language models (LLMs) in legal reasoning. Spoiler alert: performance varies, and not necessarily in the ways you might assume.
Before getting to the results, let’s examine how Prof. Peoples designed the study. Many first-year law students are taught to think like a lawyer using the IRAC method. As a refresher, this is a system using distinct steps to spot the Issue, identify the Rule, Apply the rule to the facts, and draw a Conclusion about the legal outcome. Prof. Peoples selected seven fact situations from a legal research and writing exercise book and anonymized them to test beginning rule analysis, skilled rule analysis, beginning analogical reasoning, skilled analogical reasoning, beginning statutory analysis, intermediate statutory analysis, and skilled statutory analysis. Very importantly, Prof. Peoples told the LLMs not to train on the prompts used in the testing.
Prof. Peoples’ study is thoughtfully and intentionally designed. For example, he explains, “LLMs’ statutory reasoning abilities were explored in more detail because previous studies have demonstrated LLMs’ tendency to hallucinate when analyzing statutes.” (Pp. 56-57.) In response, he tested three skill levels of statutory analysis to tease out more specificity about LLMs’ capabilities in this area. Other important features of the study include temperature setting (to limit randomness), nucleus sampling (to set a threshold probability), using zero-shot prompting (without additional examples), and employing iterative prompting (such as instructing the LLM to reason step by step or to use “chain-of-thought” reasoning). The study tested “Lexis+ AI, Anthropic’s Claude 3 Sonnet, OpenAI’s GPT 3.5, Microsoft’s Copilot 365, and Google’s Gemini lightweight LaMDA” in April and May of 2024. (P. 57.)
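For readers unfamiliar with these knobs, the following is a minimal sketch, assuming an OpenAI-style chat completions API in Python, of what temperature, nucleus sampling, and a zero-shot prompt look like in practice. The model name, parameter values, and prompt wording are illustrative placeholders of my own, not the settings or prompts Prof. Peoples used.

```python
# Illustrative sketch only; not Prof. Peoples' testing code or settings.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

fact_pattern = "..."  # an anonymized fact situation from the exercise book

response = client.chat.completions.create(
    model="gpt-3.5-turbo",   # placeholder; the study tested several different models
    temperature=0.2,         # low temperature limits randomness in the output
    top_p=0.9,               # nucleus sampling: only the most probable tokens are considered
    messages=[
        # Zero-shot prompting: instructions and facts only, no worked examples
        {"role": "user", "content": (
            "Using only the sources provided, identify the legal issue, state the "
            "governing rule, apply the rule to the facts, and state a conclusion:\n\n"
            + fact_pattern
        )},
    ],
)
print(response.choices[0].message.content)
```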
Prof. Peoples assessed the results of the prompts using eight scoring categories. Six measured different aspects of legal analysis directly; one measured response to iterative prompting; and one measured whether the model hallucinated. Refer to the article for a full explanation of the scoring, but the categories are: “relied on sources as instructed,” “issue identification,” “stating the rule,” “applying the rule,” “reaching the correct conclusion,” “conclusion stated with certainty,” “correctly responded to the prompt to use chain of thought reasoning,” and “hallucination.” Prof. Peoples explains these categories and the scoring rubric in greater detail, but I think the important takeaway is the variability in performance, not just between models but also within the same model across different tests. This result may feel familiar. My very unscientific survey of more advanced AI users finds they almost uniformly prefer different LLMs for different types of tasks.
Based on the total scores across all tests, Claude won the day, with Lexis+ AI trailing the pack. However, as you would expect, the results are more nuanced than the total scores. For example, Copilot outperformed the other models on the Beginning Rule Analysis test, with Claude performing second best but lacking some detail. You might expect a similar result in the Skilled Rule Analysis, but in fact, Claude performed best and Copilot performed worst. On that test, Lexis+ AI came in second, although Prof. Peoples noted that the Lexis+ AI response had less certainty than the other models.
The statutory analysis tests proved more difficult for the models. All the models performed reasonably well on the Beginning Statutory Analysis test. Interestingly, only Lexis+ AI referenced an important state rule critical to the response that wasn’t mentioned in the fact situation. At the same time, Lexis+ AI also cited an irrelevant rule that didn’t apply to the facts. This moderate success across all models progressively degraded on the Intermediate and Skilled Statutory Analysis tests.
I’ve frequently heard the AI lore that prompting an LLM to think in steps, or to use chain-of-thought reasoning, should improve the results. Prof. Peoples’ study indicates that this may not always be true. His results showed that Claude, Copilot, and Gemini improved with a chain-of-thought prompt, with the improvement most pronounced in more complicated scenarios. Meanwhile, the results from GPT 3.5 and Lexis+ AI did not see the same kind of improvement.
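To make the chain-of-thought idea concrete, here is a rough, self-contained sketch, again assuming an OpenAI-style chat API, of iterative prompting: the model answers once and is then re-prompted in the same conversation to reason step by step. The prompts and model name are my own illustrations, not the study’s protocol.

```python
# Illustrative sketch of iterative chain-of-thought prompting; not the study's protocol.
from openai import OpenAI

client = OpenAI()
facts = "..."  # anonymized fact situation

messages = [{"role": "user",
             "content": "Does the statute apply to these facts?\n\n" + facts}]
first = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)

messages += [
    {"role": "assistant", "content": first.choices[0].message.content},
    {"role": "user", "content": (
        "Redo the analysis using chain-of-thought reasoning: work step by step "
        "through the issue, the rule, and the application of the rule to each "
        "relevant fact, and only then state your conclusion."
    )},
]
second = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(second.choices[0].message.content)
```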
Hallucinations may be the most publicized of the AI-related legal malpractice disasters. Surprisingly, in Prof. Peoples’ study, all of the models produced zero or only one hallucination, except Lexis+ AI, which had a hallucination rate of 57%. Prof. Peoples notes, however, that a study of mostly specialized legal LLMs conducted in May 2024 found that Lexis+ AI had the lowest hallucination rate among the models tested. (Varun Magesh et al., Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools.) Lexis+ AI’s performance in Prof. Peoples’ study relative to non-specialized LLMs is surprising, but he suggests this may be because the data universe used by Lexis+ AI is much smaller than that of the non-specialized models. It would be reasonable to assume that a model limited to legal materials might outperform the non-specialized LLMs because it is focused on the most relevant sources. However, Lexis+ AI was released only six to seven months before the tests in this study, and results will likely change over time.
The most important takeaway of this study is not how the various models performed on these specific tasks but what to consider when using AI for legal analysis. In part, this is because the exact same prompt can produce different results each time it is run. As Prof. Peoples highlights, one of the issues for AI in legal work is that the results aren’t reproducible. The rule of law relies on results being consistent across similar situations, and precedent is a critical feature of American law. However, “the instability of answers created by LLMs complicates their usefulness for legal work and ability to think like a lawyer. Researchers who repeatedly input identical prompts to generative AI will never receive the same responses.” (P. 75.) (A recent article on inconsistency that may be of interest is Or Cohen-Sasson, Stochastic Justice: Legal Inconsistency by Human and AI (2025).)
Another issue for AI, and one that seems intractable, is the frequent lack of transparency about the algorithms underlying these systems and about the information on which the models were trained or which they can access. That information is protected as intellectual property and trade secrets, but the protection leaves users mostly clueless about how a model works, what it prioritizes, and what information it is drawing on for its responses. This lack of understanding would be challenged, or at least questioned, in most other legal processes, but it is often accepted when using AI tools.
Prof. Peoples’ study on the efficacy of LLMs for legal analysis should be required reading for law students and lawyers using AI tools, and can serve as a guideline for examining the performance of those tools.
Cite as: Kristina Niedringhaus, Can your AI Think Like a Lawyer?, JOTWELL (April 7, 2026) (reviewing Lee F. Peoples, Artificial Intelligence and Legal Analysis: Implications for Legal Education and the Profession, 117 Law Lib. J. 52 (2025)), https://lex.jotwell.com/can-your-ai-think-like-a-lawyer/.
May 13, 2024 | Kristina Niedringhaus | Librarianship and Legal Technology
I usually avoid articles about generational differences because they too often veer into the “kids these days” territory. However, from her opening quote and introduction, I suspect that Prof. Schlinck and I have similar feelings about those types of articles. As she writes, “After all, if complaints about the younger generation’s ‘tyranniz[ing] their teachers’ (referencing a quote from Plato, or someone) dates back to ancient Greece, then it may be time to see our students as occupying a glass half full.” (P. 272.) Prof. Schlinck’s article, OK, Zoomer: Teaching Legal Research to Gen Z, provides a brief explanation of generational theory before exploring aspects of Gen Z’s “peer personality.” She then translates those generational traits into pedagogical strategies for optimizing legal research instruction for today’s typical law student. While the suggested techniques are discussed in the context of research instruction, most of them can be extrapolated to other subjects and will be useful for teaching a variety of law school courses.
Prof. Schlinck’s article outlines ten pedagogical tactics that respond to the generational traits of Gen Z.
Explain the relevance to legal practice: Prof. Schlinck posits that because Gen Z’s reaction to the Great Recession is concern about employment and financial security, these students want to know how what they are learning will benefit them in practice. Explain why they are learning something. Legal research instructors are encouraged to explain the real-world costs of legal research and how becoming more efficient researchers will affect the bottom line.
Record short lectures for pre-class homework: Gen Z students who prefer short-format video learning will appreciate a flipped classroom approach that allows students to learn in chunks (no longer than 20 minutes) individually before class. Prof. Schlinck emphasizes that quality matters. Content, video, and audio should all be high quality. Embedded questions can be used to incentivize focus. This is a technique I use in my legal research course. Using recorded mini-lectures and quizzes before class has the added benefit of allowing time in class for assignments that simulate real-world research problems they may face in practice, but with me there to answer questions and provide guidance.
Redesign group work: Prof. Schlinck highlights that while Millennials tend to enjoy group work and collaboration, Gen Z prefers a more hybrid approach. She explains that while Gen Z students typically do enjoy collaboration, they prefer to first have time to learn the material on their own before working in a group. Gen Z students may also prefer to work by themselves on assignments that will count toward their grade.
Turn the research memo into the research email: Students need practice communicating the results of their research. Many legal research instructors use the construct of the research memo to the partner as a method of practicing this skill. However, Prof. Schlinck points out that this information is more often conveyed in an email than in a more formal memo. But if, as suggested, Gen Z is uncomfortable communicating by email, this suggestion becomes doubly important.
Provide regular and timely feedback on formative assessments: One example of this type of assignment that Prof. Schlinck provides is a live critique. She explains that Gen Z law students experienced near-constant standardized testing throughout their K-12 education. In contrast, a live critique provides the in-person interaction Gen Z desires while also giving them needed practice receiving feedback. Prof. Schlinck recommends making a live critique assignment ungraded to eliminate one source of stress from the experience.
Relate course work to what they care about: Gen Z law students tend to be more engaged with social and political issues, and Prof. Schlinck suggests harnessing this trait by partnering with legal organizations for real-world legal research experiences. Although not suggested by Prof. Schlinck, I suspect there might be some benefit, even without the real-world component, in designing research hypotheticals that go beyond fences over the property line or dog bites and present scenarios related to issues like climate change or public surveillance.
Embrace the search engine and internet research: Traditionally, law faculty have preached about the dangers of using tools like Google and of misinformation on the internet. As Prof. Schlinck states so aptly, “Zoomers are going to use Google for legal research, no matter how many times they are told not to.” (P. 299.) We need to teach them how to use these tools more effectively and to think more critically about how they search and how they assess their results. Given the past few months, I would extend this strategy to say that we also need to embrace generative AI. It is here, firms expect our students to understand how to use it, and it is our job to teach them. In many ways, prompt engineering in AI is like constructing a search. This seems like a logical extension of our role in teaching law students how to search effectively and critically evaluate their search results.
Teach the process, not the platform: A key tenet of effective legal research is that legal sources are interconnected, and we use that interconnectedness to be more efficient and effective researchers. Traditionally, this has been more obvious when looking at print resources. Prof. Schlinck argues that law students today exist in a world where they will rarely, if ever, use print resources. Accordingly, we should be teaching them the structure and links between sources regardless of format, rather than showing them the structure in print and expecting it to translate to electronic resources. She also observes that by doing this we are creating “technologically resilient” graduates who can successfully navigate the ever-evolving research platforms.
Teach Critical Legal Research and name it: Prof. Schlinck argues that law students should be deliberately and transparently taught Critical Legal Research, which examines how the structure and organization of legal information are influenced by underlying biases and decisions that impact the practice of legal research. For example, there has recently been discussion and advocacy around identifying “slave cases” when they are cited in legal materials. One purpose of this effort is to demonstrate how the structure of legal information contributes to the continued citation of slave cases as good law (see, for example, the Citing Slavery Project). She argues that teaching Critical Legal Research also helps students think critically about information sources in a broader context and builds their skills in critical analysis.
Care, and show it: If Gen Z expects inclusion, diverse viewpoints, acceptance of self-care, and a respectful environment, Prof. Schlinck argues this can be achieved by demonstrating care through responsiveness, empathy, and seeking student feedback on how they are learning. She goes on to say that care, or passion, for the subject may also be helpful here. “Subjects often perceived as boring–legal research included–can be engaging if the Prof. is excited about the material and the class.” (P. 304.)
Prof. Schlinck’s article is deft at drawing connections between the generational traits typically associated with Gen Z and pedagogical techniques that can produce the best results with Gen Z in the legal research classroom. Many of these techniques are also readily transferrable to other skills and doctrinal subjects in the law school curriculum. The article is also an excellent read for learning more about the traits and social context of the typical law student today. Many faculty adjusted their teaching strategies for Millennials, and there are significant differences between Gen Z and Millennials. It is time to update law school pedagogy to respond to these changes.
Aug 11, 2023 | Kristina Niedringhaus | Librarianship and Legal Technology
Any law librarian who works with the public or teaches no- or low-cost legal research, or any attorney or law student using free resources to conduct research, understands the wide gap in usability between fee-based databases and most free, open-access legal resources. Focusing on statutory code research, Professor Darvil’s article, Increasing Access to Justice by Improving Usability of Statutory Code Websites, examines the need not just for access to statutory codes but for providing the information in a way that allows users to find the law they need. Through the lens of website usability standards, Professor Darvil assesses state code websites and provides recommendations for how those websites can improve usability. Many states have created “Access to Justice” initiatives and commissions aiming to improve citizens’ access to and experience with the legal system. Professor Darvil’s recommendations provide excellent guidance for those interested in improving the research experience and access to the law for everyone, including those without access to fee-based databases such as Lexis or Westlaw.
Inevitably, my legal research students are, at some point in the semester, treated to my soapbox speech about how equitable access to the legal system rests on the ability of any citizen, regardless of means, to access the law they are obligated to live under. If states care about access to justice issues, logically they must care about how they provide access to the law. Professor Darvil’s article provides an excellent discussion of the access to justice issues endemic in a legal system in which, particularly on the civil side, many litigants are self-represented and how those litigants are impacted, frequently negatively, by their inability to find the law.
The unique value of Professor Darvil’s article, however, is her assessment of the usability of state statutory websites and her recommendations for improvement. Her evaluation of the state websites uses a standards-based approach. Nearly half the states and the District of Columbia have adopted the Uniform Electronic Legal Materials Act (UELMA), which seems a reasonable place to look for applicable standards. Unfortunately, as Professor Darvil notes, UELMA doesn’t address the usability of electronic legal materials. Lacking usability standards in UELMA, Professor Darvil turns instead to standards developed in the disciplines of human-computer interaction (HCI) and user experience (UX). Experts in these fields explore how easily people can navigate a particular website interface and find what they are looking for. There are several sets of standards for assessing website usability, and Professor Darvil uses the following standards from the U.S. Department of Health and Human Services (HHS):
Search Guidelines:
- Ensuring usable search results
- Designing search engines to search entire site or clearly communicate what part of the site is searched
- Making upper and lowercase search terms equivalent
- Designing search around user’s terms
Navigation Guidelines:
- Providing navigational options
- Differentiating and grouping navigation elements
- Offering a clickable list of contents
- Providing feedback on user’s location
(P. 133.)
Professor Darvil evaluates the statutory code websites of all 50 states and the District of Columbia through an exploration of the features available and some relatively simple searches for state laws, comparing the results to a 50-state survey. She then explains the results using multiple illustrative screenshots from a variety of state websites, as well as appendices outlining the results for each guideline above in every jurisdiction.
Finally, Professor Darvil ends with recommendations for states seeking to improve their statutory code websites. These recommendations range from fairly basic to more complex. Suggestions for improving navigation include providing a table of contents, clickable search trails, and navigational buttons. Suggestions for improving searching are more complex and include using a controlled vocabulary, providing context by indicating where in the code the researcher’s search terms appear, offering relevancy rankings, and giving instructions on how best to search. Professor Darvil points out that many of these recommendations are familiar to librarians and suggests that states should use librarians to help implement them.
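To illustrate two of these search recommendations, case-insensitive matching and a controlled vocabulary that maps a researcher’s everyday terms onto statutory terminology, here is a toy Python sketch. The synonym table, section text, and citations are invented examples of my own and are not drawn from Professor Darvil’s article or any actual state code website.

```python
# Toy example of controlled-vocabulary, case-insensitive statute search.
CONTROLLED_VOCABULARY = {
    "drunk driving": ["driving under the influence", "operating while intoxicated"],
    "landlord": ["lessor"],
}

SECTIONS = {  # invented citations and text, for illustration only
    "Title 47 § 11-902": "It is unlawful to engage in driving under the influence of alcohol...",
    "Title 41 § 6": "The lessor shall maintain the leased premises in a habitable condition...",
}

def search(query: str) -> list[str]:
    """Return citations whose text matches the query or any mapped synonym, ignoring case."""
    terms = [query] + CONTROLLED_VOCABULARY.get(query.lower(), [])
    return [
        citation
        for citation, text in SECTIONS.items()
        if any(term.lower() in text.lower() for term in terms)
    ]

print(search("Drunk Driving"))  # -> ['Title 47 § 11-902'], despite the capitalization
print(search("landlord"))       # -> ['Title 41 § 6'], via the 'lessor' synonym
```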
This article provides a logical and thorough assessment of state statutory code websites, explains why usability matters for access to justice, and suggests ways that governments can improve usability. As she concludes, “Governments that are based on the rule of law have special duties to their citizens: transparency, accountability, and reasonable access to their laws. […] When state governments do so, they promote access to justice and the rule of law.” (P. 153.)
Jun 3, 2022 | Kristina Niedringhaus | Librarianship and Legal Technology
Law students and attorneys often wonder if it matters whether they use United States Code Service (USCS), a Matthew Bender publication also available on Lexis+, or United States Code Annotated (USCA), a Thomson Reuters publication also available on Westlaw Edge. In 1L legal research classes, I often field the question about what the differences are between the publications. “They are both the US Code, right?” is a common refrain. The traditional lore, passed on to law students, was that USCA strove to provide an annotation for every relevant case while USCS strove to provide annotations to the “best” cases. Accordingly, USCA was said to contain a greater number of annotations and USCS was more selective. I recall being taught this in law school. However, like much folklore, the foundations for this assertion are becoming lost with time and it is unclear whether this represents the current state of the two annotated codes. The product page for the print edition of USCA states that the set has “comprehensive case annotations.” Similarly, the product page for the print version of the USCS states that it is “the most comprehensive” set. We are left to determine for ourselves the meaning of “comprehensive.” We will talk more about this later, but it is important to note that USCS case annotations include administrative decisions while USCA case annotations do not.
Ms. Marcum’s research explores whether there is a significant difference between the annotations found in USCA and USCS. Does it matter which annotated code the researcher uses? Should a thorough researcher use both? Most people would expect some unique case annotations in each annotated code with a fair amount of overlap between the two sets. The surprising result was that, out of 9164 case annotations for 23 statutes, 6748 (73.6%) were unique, appearing in only one of the annotated codes. Most researchers will be shocked by the small amount of overlap between the two publications. One might wonder whether this difference is statistically significant, and Ms. Marcum confirms that it is using a Wilcoxon T test.
Going deeper into the numbers, of the 6748 unique case annotations, 3852 were unique to USCA and 2896 were unique to USCS; 76% of the case annotations in USCA were unique, while 70.5% of the case annotations in USCS were unique. What about those administrative decisions that are included in USCS but not in USCA? They are included in these figures. Ms. Marcum explains her research methodology in detail and included the administrative decisions in the data “because they are publisher-neutral, government information that both codes could have included if they so desired.” (P. 210.)
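For readers who want to see how these figures fit together, here is a quick back-of-the-envelope check in Python. The counts come from the review above; the per-code totals are my own reconstruction, assuming the 9164 figure counts a shared annotation once in each code’s tally, which is not something the review states explicitly.

```python
# Arithmetic check of the figures quoted above; not Ms. Marcum's analysis.
total = 9164                         # all case annotations examined, across 23 statutes
unique_usca = 3852                   # annotations appearing only in USCA
unique_uscs = 2896                   # annotations appearing only in USCS
unique = unique_usca + unique_uscs   # 6748 annotations appear in only one code

print(f"unique share of all annotations: {unique / total:.1%}")  # 73.6%, as reported

# Assumption: shared annotations are counted once in each code's tally of the 9164 total.
shared_each = (total - unique) // 2  # about 1208 shared annotations per code
print(f"USCA unique share: {unique_usca / (unique_usca + shared_each):.1%}")  # ~76%, as reported
print(f"USCS unique share: {unique_uscs / (unique_uscs + shared_each):.1%}")  # ~70.6%, vs. 70.5% reported
```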
Why does this matter? It is an additional data point to help a researcher decide whether to use USCA, USCS, or both. It also adds to the information available to information professionals making decisions about whether to purchase one, or both, of the annotated codes. Neither the print sets nor their related electronic research systems are inexpensive. There is a strikingly limited amount of empirical research, either quantitative or qualitative, studying legal research tools. Ms. Marcum’s research is an important addition to the knowledge we have about the tools lawyers, law students, and law librarians use every day. For example, there are only two other comparisons of case annotations available. One, A Comparison of Case Law Results between Bloomberg Law’s ‘Smart Code’ Automated Annotated Statutes and Traditional Curated Annotated Codes, is an unpublished draft paper by Jason Zarin from 2017, available at SSRN (Social Science Research Network), https://ssrn.com/abstract=2998805 or http://dx.doi.org/10.2139/ssrn.2998805. The other is four decades old: Jeanne Benioff, A Comparison of Annotated U.S. Codes, 2 Legal Reference Services Q. 37 (1982). In fact, very few comparisons of any aspects of major legal research products exist. Some notable exceptions are works by Susan Nevelow Mart, such as The Algorithm as a Human Artifact: Implications for Legal [Re]Search; The Case for Curation: The Relevance of Digest and Citator Results in Westlaw and Lexis; and The Relevance of Results Generated by Human Indexing and Computer Algorithms: A Study of West’s Headnotes and Key Numbers and Lexis’s Headnotes and Topics, 102 Law Libr. J. 221 (2010). Also of note is research by Paul Hellyer, Evaluating Shepard’s, KeyCite, and BCite for Case Validation Accuracy, which I reviewed on Jotwell. Given the cost of major legal research databases, more evaluative comparisons of their features and tools would benefit the legal profession.
Research like Ms. Marcum’s supports evidence-based decisions by researchers and information professionals about what resources to purchase and use. It is imperative that more scholars undertake empirical research analyzing and comparing the legal research tools relied upon by the legal profession.
Apr 6, 2021 | Kristina Niedringhaus | Librarianship and Legal Technology
Most in legal academia would consider citation of their law review article in a judicial opinion an honor. However, most probably also remember Chief Justice Roberts’ 2011 comment that an article about “the influence of Immanuel Kant on evidentiary approaches in Eighteenth Century Bulgaria or something…isn’t of much help to the bar.” The Chief Justice’s comment may leave you wondering how often judicial opinions have cited law review articles and what factors might make your article into a rare unicorn. Mr. Detweiler answers these questions and more in May It Please the Court: A Longitudinal Study of Judicial Citation to Academic Legal Periodicals.
Mr. Detweiler has compiled a list of state and federal court citations to legal academic journals from 1945-2018 and mapped them as a proportion of all reported opinions and by total number annually. He tracks the ebb and flow of citations through the years and makes interesting observations about what may influence increases and decreases in citation frequency. But he doesn’t stop there. His research then compares citation frequency from 1970-2018 of articles in Harvard Law Review and Yale Law Journal with flagship journals from sample schools in each tier of the U.S. News rankings. The article also includes a scan of the history of academic law journals, the first citations of journals, and the explosive growth of journals starting in the 1970s.
The article begins with a brief history of student-edited law reviews and their relatively slow acceptance by the judiciary. Mr. Detweiler notes Chief Justice Taft’s complaint about his colleagues’ “undignified” use of law review material in their dissents. But change was already underway. The next Chief Justice, Chief Justice Hughes, labeled law reviews the “fourth estate of the law.” Mr. Detweiler then moves on to examine all citations of academic law journals from 1945-2018 in reported state and federal cases. Graphs included in the article illustrate changes over time. The percentage of cases citing law reviews rose from 1.8% in 1945 to almost 5% in the mid-1960s/1970s, with a mid-decade dip of about 0.5%. Mr. Detweiler notes that the peak of 4.9% is a 172% increase in citing cases over the rate in 1945. After the peak in the mid-1970s, the percentage of opinions citing articles declined over the next two decades. Since the mid-1990s, the percentage has leveled out somewhat, fluctuating between 1.5% and nearly 2%, reaching 1.8% in 2018. A similar graph models the growth in absolute numbers of opinions citing law review articles, with a similar increase and then decline. Mr. Detweiler attributes a portion of the percentage decrease in the early 1980s to the number of reported opinions increasing more quickly than the number of citing cases.
Mr. Detweiler posits several possible causes for the decrease in the percentage of cases citing law reviews from its heyday in the mid-1960s/1970s to its current level. Two of the most compelling are technological advances and changes in the content of academic legal scholarship. Both Lexis and Westlaw launched in the mid-1970s leading to easier access to case law, which was also growing in breadth. Academic law reviews were incorporated more slowly into the legal research systems and didn’t have more expansive coverage until the mid-1990s. Judges and their clerks could easily access case law (especially binding precedent) directly instead of relying on scholarly works.
Mr. Detweiler also highlights a shift, beginning in the 1970s, at higher-ranked law schools away from more traditionally doctrinal scholarship toward interdisciplinary work and new areas of scholarship that were not as directly applicable to the everyday work of attorneys and judges. This point becomes important when we view differences in citation rates between flagship law journals at higher-ranked and lower-ranked law schools.
Part II of the article examines how the percentage of citations varies across elite law schools (represented by Harvard and Yale), top 14 schools, and Tier I, Tier II, Tier III, and Tier IV schools. (Mr. Detweiler explains the selection of the exemplar schools in the methodology.) The data shows, unsurprisingly, a strong prestige factor in the law journals cited in cases. Harvard Law Review was the clear leader, with a significantly higher percentage of citations than the next highest, Yale Law Journal. Although the prestige factor is still apparent, the rate of opinions citing Harvard Law Review or Yale Law Journal has steadily declined from about 34% in 1970 to approximately 14% in 2018. Similarly, the percentage of opinions citing top 14 law school journals fell from 1970 to 2018. During the same period, the percentage of opinions citing Tier I law journals stayed relatively stable. The rates of opinions citing Tier II and Tier III journals had more extreme variations from year to year, but the trend has been a gradual increase. Similarly, opinions citing Tier IV flagship law journals have seen a gradual increase over time, while still accounting for the smallest percentage. The elite advantage is still present but is not as great as it once was.
Why has the gap narrowed? Mr. Detweiler points to some of the same factors highlighted in the decline of the percentage of reported opinions citing academic law journals. One of these is the rise of computer-assisted legal research (CALR) and the ease with which researchers can search and retrieve articles from a vast array of academic law journals, not just the elite journals. A related point is the explosion in the number of academic law journals. Mr. Detweiler points out that 132 journals were indexed by the Current Index to Legal Periodicals in 1970, but today Lexis and Westlaw have approximately 1000 titles in their law journal databases. He hypothesizes that the increase in the number of journals dilutes the percentage of citing cases that any one journal captures.
While discussing judicial citation of academic legal journals, Mr. Detweiler contextualizes changes in citation patterns within changes in the legal academy and the court system. He explains in detail his well-reasoned methodology for each stage of his research, including documenting Lexis search strings longer than most of us have ever contemplated. His article is an interesting foray into academic legal scholarship and its influence, or lack of influence, over judicial precedent.
Author’s Note: Mr. Detweiler provides supplemental tables along with the article. The available tables are 1) Citations to all law reviews; 2) Citations to Top 14 law reviews; 3) Citations to Tier I and Tier II law reviews; and 4) Citations to Tier III and Tier IV law reviews.