In Access to Algorithms, Professor Bloch-Wehba unleashes both the First Amendment and FOIA (the Freedom of Information Act, along with its state counterparts) on the opacity of algorithmic governance. She argues that the law of access embodied in FOIA and the free press clause can help promote public debate about algorithmic decision-making by governments, and can provide avenues by which individuals, especially under-resourced individuals, might find redress for the sometimes catastrophic outputs of automated systems.
Algorithmic decision-making in the context of law enforcement (such as sentencing and bail) has been mapped by others, and Bloch-Wehba's article touches on algorithmic criminal law determinations as well. In addition, she unpacks algorithmic operations that affect government employees, specifically the use of "value-added assessments" to quantify teacher effectiveness, which has been challenged by teachers' unions. But algorithmic denials of Medicaid benefits, to which she gives sustained attention, have heretofore received scant scholarly notice.1 Especially because elderly Medicaid applicants and recipients seeking long-term care benefits are almost by definition under-resourced individuals, the opacity of algorithmic decision-making in this context deserves careful examination. Yet the very opacity of the algorithmic operations that generate denials or reductions of public benefits presents a challenge for scholars as well as for the citizens who bear the brunt of the automated decisions.
Governmental decision-makers tend to hold their algorithms close to the chest. Making matters worse, the trend is toward outsourcing: much algorithmic decision-making software is developed and controlled by private companies, which are understandably keen to retain its value rather than let its mechanics leak into the public domain. Secrecy is an even more acute problem when the algorithms themselves are outsourced to and controlled by private vendors, and faulty decisions, as well as bias embedded in the machine, can be difficult to unearth and correct.
Bloch-Wehba examines two Medicaid examples drawn from reported federal court opinions. In the first, APS Healthcare, Inc., a private "waiver administrator," slashed the Medicaid waiver benefits of a group of West Virginians with severe developmental disabilities. The company's algorithm generated a budget that allocated benefits to individual Medicaid recipients on a year-by-year basis, using data from interviews and other assessment tools. The actual workings of the algorithm that slashed the plaintiffs' benefits were proprietary. One plaintiff with cerebral palsy had her benefits cut from $130,000 to $72,000. As a result, she lost her community placement, declined, and was at serious risk of institutionalization.
The West Virginia plaintiffs were denied a fair hearing before an Administrative Law Judge, who deferred to the computer program without investigating the conclusions it had reached. Thankfully, the district court correctly perceived the procedural due process problems and reversed.
In a second Medicaid case, arising in Idaho, plaintiffs challenged decisions produced by a secret methodology for setting individual budgets for home and community-based waiver benefits. In this case, the algorithm, an "Adult Budget Calculation Tool," was government-sourced, but the state resisted disclosure of its methodology, claiming it was a trade secret. The state then offered a compromise: it would disclose the reasoning behind the plaintiffs' benefit reductions, but subject to a gag order prohibiting redisclosure to anyone else. Although the plaintiffs ultimately prevailed, Bloch-Wehba identifies the invocation of a trade secret defense and the state's "atomized disclosure" settlement offer as "highly problematic." (P. 1279.)
After detailing the scope of the problem, Bloch-Wehba identifies a creative and effective solution. She emphasizes that the law of access provides a particularly useful tool for de-cloaking the government methodologies used to deny rights, property, or liberty. The law of access expands standing: it opens algorithms not only to the directly affected but also to the general public, including scholars and journalists.
Here is a uniquely practical solution to a serious problem: one need not be a plaintiff to demand that the veils of algorithmic opacity be lifted. Bloch-Wehba explains: "If the processes for government decision-making were already public, litigants would not have to fight tooth and nail to gain access to an explanation of why their benefits were slashed, their employment was terminated, or their release from prison was delayed." (P. 1295.) To advance toward these practical ends, she maps the nuances of the FOIA exemptions commonly in play in these contexts.
While undoubtedly practical, Access to Algorithms also resonates with foundational questions of value and justice. As she writes, "algorithmic governance portends a new era in government decision-making, it must be accompanied by new forms of transparency to protect the vital role of public oversight in our democratic system." (P. 1314.) Readers of her important article will surely agree.
- But see Kate Crawford & Jason Schultz, AI Systems as State Actors, 119 Colum. L. Rev. 1941 (2019) (taking note of algorithmic decision-making in the context of disability benefits as well as Medicaid).