The Journal of Things We Like (Lots)
Petra Molnar, Technology on the margins: AI and Global Migration Management from a Human Rights Perspective, 8 Cambridge Int’l L. J. 305 (2019).

As scholars of immigration law have been busy digesting the firehose of law and policy changes shooting out of the Trump administration, the use of new technologies at the border has been proliferating. Petra Molnar’s new article, Technology on the margins: AI and Global Migration Management from a Human Rights Perspective, reminds us that we must begin to pay closer attention to these developments and how they are deployed and regulated. Building on her excellent report, Bots at the Gate, the article provides a timely and useful roadmap of the relevant technologies and their very real risks. Though in the end Molnar is more sanguine than I about the potential of human rights law to mediate these risks, she rings a crucially important warning bell that we would all do well to keep an ear out for over the roar of the firehose.

The article begins, as it should, with a basic description of the “class of technologies that assist or replace the judgment of human decision-makers.” Automated decision-making has the potential to impact adjudication processes and outcomes across the full range of immigration actors, from border patrol to immigration courts. But what technologies are contained within this category? Molnar lists four: artificial intelligence, machine learning, automated decision systems, and predictive analytics, describing them as technologies that can be taught and can learn. Along with the description, she raises a key concern about the opacity of how exactly these decisions are made. As Frank Pasquale and others have asked, what is in that algorithm? Bias, perhaps? Molnar makes the important connection between the literature that critically examines automated decision-making and immigration adjudication. She notes that these technologies carry the same risks as human decision-makers: bias, discrimination, and error, along with gaps in accountability and transparency, reminding us not to be fooled by the algorithm’s veneer of scientific objectivity.

From this definition, the article identifies four key areas of concern around the use of technologies in migration governance, offering a foundational map and calling on future researchers to engage. The first concern Molnar raises is the privacy breaches arising from data collection, ranging from the monitoring of mobile phone records to analysis of social media to geotagging. She next examines concerns around biometrics and consent in conditions of unequal bargaining power, offering the disturbing example of refugees in Jordan who were required to submit to iris scanning in order to receive their weekly food rations. Third, Molnar describes the use of technology in surveillance, explaining that the militarization of the border through drones, robots, and remote sensors pushes migrants onto more dangerous routes, where death and serious injury are more likely. Finally, the article explores automated decision-making, discussing ICE’s bail determination algorithm and “Extreme Vetting Initiative.” Each of these topics is crying out (loud enough to be heard over the firehose) for future research by Molnar and others.

The article concludes with an explanation of why these developments are particularly concerning in the migration management arena. The pace of technological innovation combined with the dearth of transnational regulatory frameworks presents a potent recipe for abuse when baked into migration’s “discretionary space of opaque decision-making.” Molnar explicitly links these concerns to the history of migration management as an experimental forum, pointing to the use of data collection and tracking by genocidal regimes in Germany and Rwanda. She explains that there are no legally binding international agreements governing the ethical use of AI in migration management; the existing piecemeal guidelines and task forces are insufficient to face the task at hand. In particular, Molnar notes the need for specificity in legal standards to regulate AI in the migration space.

She suggests a human rights framework as a potential solution. Though less optimistic about this proposal, given human rights law’s limited protections for administrative decision-making and its lack of specificity, I am indebted to Molnar for pushing the conversation forward and certainly agree that “a more rigorous global accountability framework is now paramount.” An alternative route might be to pressure big technology companies with specific standards set by an independent body, but the most important next step is to dive into the project that Molnar has laid out for immigration scholars. As she aptly notes, “the complexity of human migration is not easily reducible to an algorithm.” Those of us with the relevant substantive knowledge must now turn our attention to figuring out how to harness the potential firehose of these new technologies for the benefit of humans on the move, in all of their brilliant complexity.

Cite as: Jaya Ramji-Nogales, Watch This Space: AI at the Border, JOTWELL (March 20, 2020) (reviewing Petra Molnar, Technology on the margins: AI and Global Migration Management from a Human Rights Perspective, 8 Cambridge Int’l L. J. 305 (2019)),