Tag Archives: Legislation
Jan 7, 2026 · Ian Stephens · Legislation
Textualism confounds the linkage between jurisprudential methodology and philosophy. In popular conception, a judge’s choice of interpretive tools is bound to be tightly intertwined with the ideological flavor of that judge’s outcomes: originalism or strict constructionism lead to conservative results, while living constitutionalism or purposivism lead to liberal results. Textualism, once pilloried as a tool of the right, is now making a play at escaping that perceptual mold. As the methodology has become increasingly ubiquitous, it has taken on a new character. It now boasts of broad acceptance throughout the judiciary, and even such conservative paragons as Neil Gorsuch and Clarence Thomas have sometimes followed textualism toward what might seem rather progressive outcomes. Muldrow v. City of St. Louis, decided last term, is sure to join cases like Southwest Airlines v. Saxon, New Prime v. Oliveira, and the quintessential Bostock v. Clayton County in the pantheon of progressive textualism.
Missouri’s Sandra Sperino makes this point in her forthcoming article, When is Discrimination Harmful?, where she explores Muldrow’s dominating use of textualism to drastically expand the reach of employment discrimination laws. But, in its quest to excise subjective applications of Title VII, the Muldrow Court opened just as many doors as it closed—leaving lower courts to struggle with applications that are no less subjective than the standard Muldrow abandoned. What’s the root of this inextricable challenge? Sperino posits that the concept of “harm,” a foundational component of anti-discrimination law, inevitably “rests on judgments that cannot be answered through the statutory text alone.”
Muldrow concerned the transfer of a St. Louis police sergeant, because of her sex, to a position that was less prestigious and less subjectively rewarding. Most circuits had held that transfers like this, involving no change in rank or pay, do not constitute “harm” within the meaning of Title VII. Instead, harm must be “materially adverse” to the plaintiff before it is actionable. Muldrow rejects that standard, offering instead that the plain text of Title VII requires nothing more than a showing of “some harm.”
But Professor Sperino and others, myself included, have noted that “some harm” is hardly more definite than “materially adverse.” The new standard is certainly lower, but how much lower is not clear, and the statutory text seems insufficient to reveal the answer. My own take on Muldrow would look beyond the text toward corresponding legal conceptions of harm.
Professor Sperino agrees. But she draws a more universal lesson from Muldrow. Title VII, she suggests, was designed to be broad—and even ambiguous in parts—so as to leave room to “strike at the entire spectrum” of discrimination. The some-harm standard demonstrates this malleability. So, while textualism can answer some of the questions Title VII poses, stopping there leaves the job only half done. Taking the next step, courts should look to the history and purpose of Title VII for answers.
In many ways, the modern approach to statutory interpretation, which leaves extraneous sources and policy off to the side, is superior. Done right, it avoids the manipulation, bias, and subjectivity that an open universe can introduce. But one of its biggest pitfalls is that past Congresses often wrote laws with the assumption that courts would look more broadly at the law’s history and purpose. In the case of Title VII, Sperino explains, this meant Congress wrote open-endedly—expecting that courts would apply the law as needed to strike broadly against discrimination. Indeed, many of the core tests courts apply to Title VII were developed by reference to its objective, not its text.
Sperino makes a compelling case for a more comprehensive approach to discrimination law. In fact, I might even go a step further to argue that Title VII and its sister-statutes double as one of a class of background statutes that provide context to the law more broadly. As courts grapple with the unanswered questions Muldrow left, they should take Professor Sperino’s lesson that those questions are supposed to be there. And they should take heed that answers exist, not just in the statute’s text, but in its character.
Jul 11, 2025 · Anita Krishnakumar · Legislation
James J. Brudney & Lawrence Baum, Does Textualism Constrain Supreme Court Justices?, available at SSRN (Feb. 3, 2025).
Textualist jurists and scholars have long contended that their preferred interpretive approach is superior to competing approaches because text-based analysis limits judicial discretion and constrains judges. Indeed, the late Justice Scalia declared in his book, Reading Law: The Interpretation of Legal Texts, that a textualist interpretive approach would “narrow the range of acceptable judicial decision-making” and “curb—even reverse—the tendency of judges to imbue authoritative texts with their own policy preferences.” Correspondingly, textualists long have criticized legislative history as an illegitimate interpretive tool that “has something for everyone” and “greatly increases the scope” of judicial manipulation of statutory meaning to suit the judge’s ideological preferences. To date, these claims have gone largely untested, although several scholars have offered anecdotal evidence suggesting that textualism does not, in fact, constrain judges all that much.
Enter Professors Brudney and Baum, who marshal an impressive dataset of 660 statutory decisions involving labor and employment law statutes decided between 1969 and 2024 in order to measure empirically how well textualist interpretive tools constrain judicial decision making. The result is an article rich in both empirical and doctrinal analysis of liberal and conservative justices’ use of textual canons, legislative history, and legislative purpose to reach interpretive outcomes consistent (or inconsistent) with their ideological preferences. Because their dataset is so broad—covering 54 terms’ worth of cases—Brudney and Baum are able to document historical changes and draw historical comparisons that other scholars have only been able to gesture at anecdotally.
The authors report several important findings, summarized below:
First, although this will surprise no one, Brudney and Baum document dramatically and convincingly just how much the Court has increased its reliance on textualist tools, and correspondingly, decreased its reliance on purposivist and intentionalist tools between 1969 and 2023. For example, majority opinion rates of reference to dictionary definitions jumped from 1.0% during the 1969-1976 terms to 30.6% during the 2014-2023 terms; rates for language canon usage more than tripled from 14.6% during the 1969-1976 terms to 47.2% in the 2014-2023 terms; while rates for legislative history and purpose plummeted from 48.5% (history) and 84.5% (purpose) during the 1969-1976 terms to 13.9% (history) and 45.8% (purpose) during the 2014-2023 terms. Although I study the Court’s statutory cases closely, I found the magnitude of these increases and decreases stunning.
Second, the authors found that during the Rehnquist and Roberts Courts, both conservative and liberal justices reached outcomes that were consistent with their ideological preferences more often when they authored opinions that employed ordinary meaning than when they did not. (The sole exception was that during the Burger Court, liberal justices reached conservative outcomes more often when they employed ordinary meaning than when they did not). Thus, the authors conclude that there is no empirical evidence that ordinary meaning analysis—one of the touchstones of textualist interpretation—constrains judges to decide cases in a manner inconsistent with their ideological preferences, and there may even be some evidence that on the modern polarized Court ordinary meaning analysis enables judges more freely to adopt statutory constructions consistent with their ideological preferences.
Third, Brudney and Baum find interesting temporal changes in the relationship between judicial reliance on legislative history and the ideological valence of an interpretation. That is, the data reveal that during the Burger Court, authoring justices (especially conservative justices) were “substantially” more likely to reach an interpretive outcome inconsistent with their ideological preferences when they invoked legislative history than when they did not. However, during the Rehnquist and Roberts Courts, this pattern flipped for conservative justices—who were more likely to reach conservative outcomes when invoking legislative history than when not citing such history. For liberal justices, the rates of reaching conservative outcomes stayed almost the same whether they invoked legislative history or did not invoke such history (although liberal justices were slightly more likely to reach a conservative outcome when relying on legislative history than when not doing so). Brudney and Baum observe that in the modern era, the textualist critique that legislative history is easily manipulable has become a self-fulfilling prophecy—and they posit that textualist jurists may be more likely to use legislative history strategically in the modern era because of their jaundiced view of the manipulability of this interpretive resource.
There are other, more nuanced, empirical findings sprinkled throughout the paper, but I will leave those for readers to discover themselves.
To close, Brudney and Baum also provide a series of close doctrinal comparisons of “dueling” majority and dissenting opinions that both invoke legislative history, as well as “dueling” opinions that both invoke ordinary meaning. The upshot of their doctrinal analysis is that the justices duel over ordinary meaning just as much, and in many of the same ways, as they duel over legislative history. In other words, the infamous textualist critique that “there is something for everyone” in the vast legislative history of a statute is just as true of ordinary meaning analysis. Brudney and Baum thus conclude that “reliance on ordinary meaning allows for judicial discretion to the same substantial degree and along exactly the same categorical lines as when justices rely on legislative history” and that “there is every reason to conclude that the stock critique of legislative history (the risks of picking out your friends in a crowd) is comparably applicable to textual analysis.”
In short, Does Textualism Constrain Supreme Court Justices? provides much-needed empirical testing of one of textualist interpretive philosophy’s key claims. It is, of course, just one article, and much more work needs to be done in this area, but Brudney and Baum provide an admirable and welcome first foray into tackling this important empirical question. Anyone interested in statutory interpretation should read this article!
Sep 5, 2024 · Anita Krishnakumar · Legislation
Margaret H. Lemos & Deborah A. Widiss, The Solicitor General, Consistency, and Credibility, 100 Notre Dame L. Rev. __ (forthcoming 2024), available at SSRN (Mar. 25, 2024).
In The Solicitor General, Consistency, and Credibility, Professors Maggie Lemos and Deborah Widiss provide an eye-opening deep dive into an increasingly common—and oft-criticized—practice engaged in by the Solicitor General’s Office (OSG): rejecting a legal argument that was offered on behalf of the United States in prior litigation. Such flip-flops by the SG’s office have received considerable attention in recent years, as shifts in presidential administrations have produced a number of high-profile reversals that have, at times, garnered open criticism from the U.S. Supreme Court. The conventional wisdom posits that such OSG reversals are undesirable and pose a threat to the SG’s credibility with the Court. Lemos & Widiss seek to turn that wisdom on its head, arguing that there are often good reasons for the OSG to reverse course and urging courts to make a more nuanced assessment of the circumstances surrounding a reversal before deeming it problematic.
In order to better understand how and why the SG’s office engages in litigation flips, the authors compiled an original dataset of 130 cases dating from 1892 to the close of the Court’s 2022 Term that contained such reversals. Their goal was to provide both a descriptive account of litigation flips and a normative argument for why (and when) the Court’s skepticism of such flips is itself problematic. To that end, the authors offer the following taxonomy, or categories, of OSG flips: (1) flips that are due to changes in presidential administration; (2) flips that result from the fact that the government often wears “two hats”–such that it may have taken one position in litigation involving one agency, and a different position in litigation involving a different agency or that it may have been acting as an employer in one lawsuit but as a regulator in a later lawsuit; (3) flips that arise as a result of changed factual or legal developments, including on-the-ground experience with the relevant legal regime, or intervening changes in statutes, regulations, or judicial interpretations; or (4) flips that result simply from “zealous advocacy”—or efforts to obtain the best possible outcome for the client in a particular case.
After detailing the circumstances under which OSG flips typically occur, the authors turn to addressing what precisely courts seem to find so problematic about such flips. In this section, Lemos & Widiss offer several thoughtful theoretical guesses regarding the potential causes for the Justices’ discomfort with OSG flips. These include, for example, (1) the modern Court’s formalist judicial philosophy, which assumes there is a single “correct” answer for each legal question and accordingly views the rejection of one’s former position as duplicitous; as well as (2) the possibility that the Justices view the OSG as a trusted advisor and regard litigation flips as a sign of carelessness or a propensity for error that renders the advisor less trustworthy.
Lemos & Widiss ultimately reject the above reasoning, arguing that judicial disapproval of flips is usually misplaced—and that, in fact, the ideological flips that tend to draw the most criticism often are the ones that are most justified. Specifically, the authors argue that the OSG is a source of important information to the justices—including information about how government programs work in practice. Citing earlier work by David Strauss, they note that the OSG is in a position to bring to the Court’s attention the effects that legal rules and decisions are having on the ground. In other words, the authors suggest that OSG flips need not signal a lack of care or “error,” but could instead reflect attentiveness to changed factual or legal circumstances—and that in so flipping positions, the OSG may be serving the Court well by making the Justices aware of new developments that justify a shift in legal rules.
Professors Lemos & Widiss conclude by noting that litigation reversals by the OSG can—and often do—reflect a principled effort to understand the law in light of current norms and needs. And they urge the Court to take a more nuanced approach to evaluating such reversals, rather than adopt a knee-jerk view that all OSG flips are problematic and should be treated skeptically.
In the end, The Solicitor General, Consistency, and Credibility provides novel insights into how, when, and why OSG flips occur—as well as persuasive arguments about why such flips are not uniformly (or even predominantly) bad. The article is a must-read for anyone who is interested in the OSG and the role it plays in Supreme Court litigation.
Cite as: Anita Krishnakumar, When the Solicitor General’s Office Flip-Flops, JOTWELL (September 5, 2024) (reviewing Margaret H. Lemos & Deborah A. Widiss, The Solicitor General, Consistency, and Credibility, 100 Notre Dame L. Rev. __ (forthcoming 2024), available at SSRN (Mar. 25, 2024)), https://lex.jotwell.com/when-the-solicitor-generals-office-flip-flops/.
Dec 20, 2023 · Anita Krishnakumar · Legislation
Jesse M. Cross, The Fair Notice Fiction, 75 Ala. L. Rev. __ (2023), available at SSRN (Apr. 21, 2023).
In The Fair Notice Fiction, Professor Jesse Cross provides a much-needed deep dive into one of modern textualism’s core tenets—that giving statutes their ordinary meaning puts people on notice about their legal obligations and therefore promotes the rule of law value of fair notice to the public. The claim to promote fair notice is one that textualism long has asserted, but it has taken on a new importance in the last few years as textualist Justices have come to dominate the modern Court and to loudly proclaim their commitment to identifying a statute’s “original public meaning.”
The Fair Notice Fiction seeks to debunk this core textualist justification. Professor Cross’ central critique is that the idea of providing fair notice to the public through statutory text has always been a fiction—for at least two reasons. First, in the modern era, the reading of statutory text is a “language game accessible only to legal elites.” (P. 1.) Second, throughout most of history, literacy has been low, texts have been scarce, and language barriers have abounded—even in those early democracies often touted as providing fair notice of statutory meaning to the public.
The first substantive section of Cross’s article explains in detail why meaningful fair notice cannot exist in the modern era. Specifically, Cross argues that two structural features of modern federal law—(1) its length; and (2) its “nontransparent interconnectivity”—make it impossible for ordinary citizens today to read and comprehend statutory text. With respect to length, Cross provides many useful and stark data points, such as the fact that as of 2018 the U.S. Code was 60,000 pages long and that each page of the Code contains three times as many words as a typical book page. Regarding nontransparent interconnectivity—Cross uses this somewhat unwieldy phrase to describe an important and underappreciated reality: Federal statutory law is filled with numerous nonexplicit, nonobvious points of interconnection, whereby the meaning of a provision in one statute (e.g., a Medicare statute) depends on a provision in another statute (e.g., a penalty statute that applies broadly throughout the U.S. Code, or the Religious Freedom Restoration Act, which likewise applies across the U.S. Code) but the second statute is nowhere mentioned in the statute at issue. Indeed, The Fair Notice Fiction outlines eight common categories of such nontransparent interconnectivity in federal law. Both of these sections, on length and interconnectivity, are incredibly valuable to anyone interested in how statutes operate on the ground—as both provide detailed information that has so far gone undocumented in the statutory interpretation literature. Ultimately, Cross argues that the complexities of modern federal law make it impossible for anyone other than the legal elite to read and comprehend federal statutes—because only the legal elite possess what Cross calls the “regime literacy” to find and read statutes, let alone the interconnected provisions that might bear on the meaning of the statute at issue.
Professor Cross then goes on to examine what fair notice meant in ancient Rome, early England, and the United States. In this section, he provides detailed historical accounts of just how inaccessible statutory law always has been to the general public—even during historical periods now touted as exemplars of public notice. With respect to ancient Rome, for example, Cross debunks the popular myth of public notice via posted tablets by noting that (1) most citizens lived in rural areas, while public postings were made in urban centers; (2) most laws were posted for only 30 days, and it was difficult for even legal actors to access older statutory texts; (3) while thousands of laws were enacted, only a small percentage of these were displayed to the public; (4) widespread illiteracy rendered it impossible for most ordinary citizens to read even those laws that were posted; and (5) language barriers compounded these problems as statutes were posted only in Latin, although Greek was a major competing language and a dozen or more languages were spoken throughout the Roman empire.
Professor Cross concludes by recommending that scholars and jurists re-examine their commitment to text-based notice and focus instead on “active investments in informing the public about the content of laws”—through intermediaries who possess “regime literacy” and can help ordinary people understand what a statute means. (P. 78.) Cross does not go into great depth regarding who should serve as such intermediaries, but he does mention administrative agencies as well as the “Navigator” program that designated individuals and organizations to help ordinary citizens navigate the new insurance landscape created by the Affordable Care Act.
In the end, Professor Cross’s article provides valuable and insightful information about the nuances of both federal statutory law and the history of fair notice in early democracies. The article is a must-read for anyone who wishes to understand what fair notice has meant historically—and what it realistically can look like in the modern era.
Feb 2, 2023 · Eli Nachmany · Legislation
A couple of decades ago, Oakland Athletics general manager Billy Beane revolutionized baseball. In constructing the Athletics’ roster of players, Beane employed analytics and data to exploit market gaps in the Major League Baseball labor economy—an innovative strategy. It worked. In 2002, Beane’s Athletics won over 63% of their games and easily made the playoffs on a shoestring budget. Michael Lewis’s book Moneyball—which later became a movie—chronicles the 2002 Oakland Athletics season as a triumph of empirical analysis in baseball. But when other teams jumped into the fray, attempting to reorient their rosters entirely through analytics, many found limited success. The key insight to be gleaned from Moneyball is that analytics has a place in roster construction; at the same time, the last twenty years of baseball show that analytics are not everything.
In Testing Textualism’s “Ordinary Meaning”, Professor Tara Leigh Grove resists the empiricists’ takeover of a wholly different sport: interpretation of statutory text. Professor Grove begins her piece by noting that “[s]cholarship on statutory interpretation has taken an empirical turn.” In particular, scholars have employed empirical methods—e.g., surveys—to ascertain “ordinary meaning.” For these commentators, “ordinary meaning” is an empirical fact, “thereby justifying efforts to test textualism.” (Textualism is a theory of statutory interpretation, popularized by Justice Antonin Scalia, by which jurists interpret statutes according to the statutes’ “ordinary meaning.”) But in Professor Grove’s telling, “ordinary meaning” is also a legal concept that raises normative questions about law interpretation.
Amid a rush of empirical scholarship on textualism, Professor Grove takes up the mantle of law. The initial introduction of empirics into the practice of textualism was like Billy Beane’s initial success with the Moneyball-driven Athletics: a revolutionary innovation that changed the way we think about the enterprise of statutory interpretation. But a singular focus on empirical analysis obscures the true nature of the search for ordinary meaning, as Professor Grove ably demonstrates in her excellent Foreword to the George Washington Law Review’s Annual Review of Administrative Law. Without a recognition that “ordinary meaning” is a legal concept, one cannot properly ascertain what a law’s “ordinary meaning” actually is.
Professor Grove’s piece begins by explaining what she means when stating that “ordinary meaning” is a legal concept. In the first section, she contrasts “ordinary” and “technical” meaning. For example, although the “ordinary meaning” of the term “standing” to a lay reader might be the opposite of “sitting,” a well-trained lawyer will recognize that the term refers “to one requirement for launching a suit in federal court.” In some cases, courts will take the words of a statute in their ordinary sense; in others, they will read words in their technical sense. To make this determination, a court must “address certain legal questions,” including (1) which sources are relevant to determining the meaning of the text and (2) how the structure of the surrounding statute informs the inquiry.
Here, Professor Grove allows that the search for ordinary meaning has an empirical component. Ignoring this reality would be like signing a free agent baseball player without even glancing at his prior statistics. But as Professor Grove explains, “when legal disputes arise, a good deal of the search for ‘ordinary meaning’ will depend on legal considerations.” For this reason, textualists “adopt legal rules to choose which ordinary meaning is preferable,” with many prominent scholars and jurists emphasizing “the understanding of the objectively reasonable person.” Professor Grove notes the divides within the textualist movement about the content of these legal rules (a subject she has explored in other scholarship). Notwithstanding these divides, Professor Grove points out that textualists generally all treat “ordinary meaning” as a legal, normative—not empirical—inquiry.
Part II of Professor Grove’s piece lays out how some scholars have called this approach into question. These scholars’ method of determining “ordinary meaning” is to “identify empirically the use of a term or phrase that is the most common or popular.” One way to accomplish this task is a survey of the broader public. Professor Grove contends that such reliance on empirical methods is misguided for at least two reasons.
First, the “[s]cholarship that relies on survey methods appears to assume that the ‘ordinary meaning’ of a statutory provision depends on the views of the general public.” But when it comes to highly technical statutes that are aimed at federal agencies and regulated parties, that assumption may not hold true. Rather, Professor Grove submits that when one treats “ordinary meaning” as a legal concept, “the hypothetical reasonable reader can be adjusted to comport with the statute at issue.”
Second, the shift to empiricism risks conflating the modern era with the relevant interpretive timeframe. As Professor Grove asks: “how can one determine by surveying the public in 2022 the meaning of a statute enacted in, say, 1871, 1920, or 1964?” The nature of empirical work introduces what Professor Grove calls a “temporal complication” into the method’s workability when searching for “ordinary meaning.”
In Part III, Professor Grove notes the implications of her thesis. To start, textualists have some serious disputes to resolve about which legal tools should be used to discern ordinary meaning. Moreover, “[i]f ordinary meaning is a legal concept,” jurists may need to be more cautious when using “homey examples” (like Justice Scalia’s analogy of “using a cane” to “using a firearm” in his Smith v. United States dissent)—a frequent practice. Furthermore, Professor Grove acknowledges that her work raises some questions about “fair notice”—“itself a legal concept,” as she points out.
Without question, the empiricist turn in the statutory interpretation scholarship has deepened our understanding of how to ascertain “ordinary meaning.” Professor Grove merely warns against overreliance on empiricism in statutory interpretation, just as any modern baseball executive would caution against an overreliance on analytics in building a baseball roster. In describing “ordinary meaning” as a legal concept, Professor Grove refocuses statutory interpretation on the legal and normative issues it necessarily raises. Her piece is worth a read.